+ '_ {} | ^^ expected named lifetime parameter | = help: this function's return type contains a borrowed value, but there is no value for it to be borrowed from help: consider using the `'static` lifetime, but this is uncommon unless you're returning a borrowed value from a `const` or a `static`, or if you will only have owned values | LL | fn frob() -> impl Fn
+ 'static {} | ~~~~~~~ error[E0412]: cannot find type `P` in this scope --> $DIR/opaque-used-in-extraneous-argument.rs:5:22 | LL | fn frob() -> impl Fn
+ '_ {} | ^ not found in this scope | help: you might be missing a type parameter | LL | fn frob
() -> impl Fn
+ '_ {} | +++ error[E0412]: cannot find type `T` in this scope --> $DIR/opaque-used-in-extraneous-argument.rs:5:34 | LL | fn frob() -> impl Fn
+ '_ {} | ^ not found in this scope | help: you might be missing a type parameter | LL | fn frob + '_ {} | +++ error[E0658]: the precise format of `Fn`-family traits' type parameters is subject to change --> $DIR/opaque-used-in-extraneous-argument.rs:5:19 | LL | fn frob() -> impl Fn + '_ {} | ^^^^^^^^^^^^^^^^^ help: use parenthetical notation instead: `Fn(P) -> T` | = note: see issue #29625 + '_ {} | ^^^^ error[E0061]: this function takes 0 arguments but 1 argument was supplied --> $DIR/opaque-used-in-extraneous-argument.rs:19:5 | LL | open_parent(&old_path) | ^^^^^^^^^^^ --------- | | | unexpected argument of type `&impl FnOnce<{type error}, Output = {type error}> + Fn<{type error}> + 'static` | help: remove the extra argument | note: function defined here --> $DIR/opaque-used-in-extraneous-argument.rs:11:4 | LL | fn open_parent<'path>() { | ^^^^^^^^^^^ error: aborting due to 6 previous errors Some errors have detailed explanations: E0061, E0106, E0412, E0658. For more information about an error, try `rustc --explain E0061`. ", "commid": "rust_pr_120056"}], "negative_passages": []}
{"query_id": "q-en-rust-8e9923fb018fde38964429005b55eb96bb0e271c23a44c28554fc1275db50c7c", "query": " $DIR/no-gat-position.rs:8:56 | LL | fn next<'a>(&'a mut self) -> Option > { (_: P, _: ()) {} fn f3<'a>(x: &'a ((), &'a mut ())) { f2(|| x.0, f1(x.1)) //~^ ERROR cannot borrow `*x.1` as mutable, as it is behind a `&` reference //~| ERROR cannot borrow `*x.1` as mutable because it is also borrowed as immutable } fn main() {} use hir::ClosureKind; use hir::{ClosureKind, Path}; use rustc_data_structures::captures::Captures; use rustc_data_structures::fx::FxIndexSet; use rustc_errors::{codes::*, struct_span_code_err, Applicability, Diag, MultiSpan};", "commid": "rust_pr_120990"}], "negative_passages": []}
{"query_id": "q-en-rust-dd66f626e30f9a422ef543deab970d3dbd7b6175d2cb3eb638f16edd4a301a95", "query": "No response No response No response\nI thought the intent with was to use it in a passthrough style, so you can insert it into existing expressions ie. becomes . In which case, maybe the lint should be for not using the value resulting from evaluating .\nNot sure about the majority of users but for me is a convenience over . I use it without using the return value almost all of the time.\nProbably a common thing to do, sure, but if we're talking about adding a lint, why not nudge the user in the direction of the intended and documented usage? You could at least do both: \"insert into an existing expression or pass it a borrow\" etc.\nThe fix in will only report a hint when there is already a happened. For example this code: before this fix the error is: with the fix, since there is already a error, the lint will try to find out the original macro, and suggest a borrow: The suggestion is not 100% right, since if we change it to , we need one more extra fix now(the return type is not expecting a reference), but there is a different issue, at least we have already moved one step on. If the original code is: Then suggest a borrow is a great and correct hint.\nSimilar in the scenario of function argument: original error is: new errors from the fix:\nWould not also be a good hint? The suggestion wouldn't compile anyway right? Because would need to change to take ? Without involved, the compiler does indeed suggest that: Once that's addressed (whichever way it's done), once again becomes transparent, and eg. or works. does not, however, and I see why a \"consider borrowing\" hint would be good there. However, if you make the change first and leave as-is, the resulting error does not include the suggestion to change , which is unhelpful. 
(I agree that the message is even less helpful though, I'm not disputing that.)\nYes, it's a good hint, but from the implementation view of a hint, it's much harder to analyze the following code from the error point and then construct a new expression. Even it seems an easy thing for humans. You are right, the suggestion is not 100% right, we may need an extra follow-up fix after applying the hint, which is also a common thing in rustc diagnostics.", "positive_passages": [{"docid": "doc-en-rust-b0b9863b403080f447e63ae0cf9327cf902a07dddbb167336016e902e8fc32b9", "text": "use rustc_middle::bug; use rustc_middle::hir::nested_filter::OnlyBodies; use rustc_middle::mir::tcx::PlaceTy; use rustc_middle::mir::VarDebugInfoContents; use rustc_middle::mir::{ self, AggregateKind, BindingForm, BorrowKind, CallSource, ClearCrossCrate, ConstraintCategory, FakeBorrowKind, FakeReadCause, LocalDecl, LocalInfo, LocalKind, Location, MutBorrowKind,", "commid": "rust_pr_120990"}], "negative_passages": []}
{"query_id": "q-en-rust-dd66f626e30f9a422ef543deab970d3dbd7b6175d2cb3eb638f16edd4a301a95", "query": "No response No response No response\nI thought the intent with was to use it in a passthrough style, so you can insert it into existing expressions ie. becomes . In which case, maybe the lint should be for not using the value resulting from evaluating .\nNot sure about the majority of users but for me is a convenience over . I use it without using the return value almost all of the time.\nProbably a common thing to do, sure, but if we're talking about adding a lint, why not nudge the user in the direction of the intended and documented usage? You could at least do both: \"insert into an existing expression or pass it a borrow\" etc.\nThe fix in will only report a hint when there is already a happened. For example this code: before this fix the error is: with the fix, since there is already a error, the lint will try to find out the original macro, and suggest a borrow: The suggestion is not 100% right, since if we change it to , we need one more extra fix now(the return type is not expecting a reference), but there is a different issue, at least we have already moved one step on. If the original code is: Then suggest a borrow is a great and correct hint.\nSimilar in the scenario of function argument: original error is: new errors from the fix:\nWould not also be a good hint? The suggestion wouldn't compile anyway right? Because would need to change to take ? Without involved, the compiler does indeed suggest that: Once that's addressed (whichever way it's done), once again becomes transparent, and eg. or works. does not, however, and I see why a \"consider borrowing\" hint would be good there. However, if you make the change first and leave as-is, the resulting error does not include the suggestion to change , which is unhelpful. 
(I agree that the message is even less helpful though, I'm not disputing that.)\nYes, it's a good hint, but from the implementation view of a hint, it's much harder to analyze the following code from the error point and then construct a new expression. Even it seems an easy thing for humans. You are right, the suggestion is not 100% right, we may need an extra follow-up fix after applying the hint, which is also a common thing in rustc diagnostics.", "positive_passages": [{"docid": "doc-en-rust-a509f91547d230f0edfb1c98b24d4e762d10f8579721a92891c17994ca92a66a", "text": "self.suggest_cloning(err, ty, expr, None, Some(move_spans)); } } if let Some(pat) = finder.pat { self.suggest_ref_for_dbg_args(expr, place, move_span, err); // it's useless to suggest inserting `ref` when the span don't comes from local code if let Some(pat) = finder.pat && !move_span.is_dummy() && !self.infcx.tcx.sess.source_map().is_imported(move_span) { *in_pattern = true; let mut sugg = vec![(pat.span.shrink_to_lo(), \"ref \".to_string())]; if let Some(pat) = finder.parent_pat {", "commid": "rust_pr_120990"}], "negative_passages": []}
{"query_id": "q-en-rust-dd66f626e30f9a422ef543deab970d3dbd7b6175d2cb3eb638f16edd4a301a95", "query": "No response No response No response\nI thought the intent with was to use it in a passthrough style, so you can insert it into existing expressions ie. becomes . In which case, maybe the lint should be for not using the value resulting from evaluating .\nNot sure about the majority of users but for me is a convenience over . I use it without using the return value almost all of the time.\nProbably a common thing to do, sure, but if we're talking about adding a lint, why not nudge the user in the direction of the intended and documented usage? You could at least do both: \"insert into an existing expression or pass it a borrow\" etc.\nThe fix in will only report a hint when there is already a happened. For example this code: before this fix the error is: with the fix, since there is already a error, the lint will try to find out the original macro, and suggest a borrow: The suggestion is not 100% right, since if we change it to , we need one more extra fix now(the return type is not expecting a reference), but there is a different issue, at least we have already moved one step on. If the original code is: Then suggest a borrow is a great and correct hint.\nSimilar in the scenario of function argument: original error is: new errors from the fix:\nWould not also be a good hint? The suggestion wouldn't compile anyway right? Because would need to change to take ? Without involved, the compiler does indeed suggest that: Once that's addressed (whichever way it's done), once again becomes transparent, and eg. or works. does not, however, and I see why a \"consider borrowing\" hint would be good there. However, if you make the change first and leave as-is, the resulting error does not include the suggestion to change , which is unhelpful. 
(I agree that the message is even less helpful though, I'm not disputing that.)\nYes, it's a good hint, but from the implementation view of a hint, it's much harder to analyze the following code from the error point and then construct a new expression. Even it seems an easy thing for humans. You are right, the suggestion is not 100% right, we may need an extra follow-up fix after applying the hint, which is also a common thing in rustc diagnostics.", "positive_passages": [{"docid": "doc-en-rust-2128e9fd06f41f49832d4e566037b422794f3aa8ff576207120a9095511b4968", "text": "} } // for dbg!(x) which may take ownership, suggest dbg!(&x) instead // but here we actually do not check whether the macro name is `dbg!` // so that we may extend the scope a bit larger to cover more cases fn suggest_ref_for_dbg_args( &self, body: &hir::Expr<'_>, place: &Place<'tcx>, move_span: Span, err: &mut Diag<'infcx>, ) { let var_info = self.body.var_debug_info.iter().find(|info| match info.value { VarDebugInfoContents::Place(ref p) => p == place, _ => false, }); let arg_name = if let Some(var_info) = var_info { var_info.name } else { return; }; struct MatchArgFinder { expr_span: Span, match_arg_span: Option, arg_name: Symbol, } impl Visitor<'_> for MatchArgFinder { fn visit_expr(&mut self, e: &hir::Expr<'_>) { // dbg! is expanded into a match pattern, we need to find the right argument span if let hir::ExprKind::Match(expr, ..) = &e.kind && let hir::ExprKind::Path(hir::QPath::Resolved( _, path @ Path { segments: [seg], .. 
}, )) = &expr.kind && seg.ident.name == self.arg_name && self.expr_span.source_callsite().contains(expr.span) { self.match_arg_span = Some(path.span); } hir::intravisit::walk_expr(self, e); } } let mut finder = MatchArgFinder { expr_span: move_span, match_arg_span: None, arg_name }; finder.visit_expr(body); if let Some(macro_arg_span) = finder.match_arg_span { err.span_suggestion_verbose( macro_arg_span.shrink_to_lo(), \"consider borrowing instead of transferring ownership\", \"&\", Applicability::MachineApplicable, ); } } fn report_use_of_uninitialized( &self, mpi: MovePathIndex,", "commid": "rust_pr_120990"}], "negative_passages": []}
{"query_id": "q-en-rust-dd66f626e30f9a422ef543deab970d3dbd7b6175d2cb3eb638f16edd4a301a95", "query": "No response No response No response\nI thought the intent with was to use it in a passthrough style, so you can insert it into existing expressions ie. becomes . In which case, maybe the lint should be for not using the value resulting from evaluating .\nNot sure about the majority of users but for me is a convenience over . I use it without using the return value almost all of the time.\nProbably a common thing to do, sure, but if we're talking about adding a lint, why not nudge the user in the direction of the intended and documented usage? You could at least do both: \"insert into an existing expression or pass it a borrow\" etc.\nThe fix in will only report a hint when there is already a happened. For example this code: before this fix the error is: with the fix, since there is already a error, the lint will try to find out the original macro, and suggest a borrow: The suggestion is not 100% right, since if we change it to , we need one more extra fix now(the return type is not expecting a reference), but there is a different issue, at least we have already moved one step on. If the original code is: Then suggest a borrow is a great and correct hint.\nSimilar in the scenario of function argument: original error is: new errors from the fix:\nWould not also be a good hint? The suggestion wouldn't compile anyway right? Because would need to change to take ? Without involved, the compiler does indeed suggest that: Once that's addressed (whichever way it's done), once again becomes transparent, and eg. or works. does not, however, and I see why a \"consider borrowing\" hint would be good there. However, if you make the change first and leave as-is, the resulting error does not include the suggestion to change , which is unhelpful. 
(I agree that the message is even less helpful though, I'm not disputing that.)\nYes, it's a good hint, but from the implementation view of a hint, it's much harder to analyze the following code from the error point and then construct a new expression. Even it seems an easy thing for humans. You are right, the suggestion is not 100% right, we may need an extra follow-up fix after applying the hint, which is also a common thing in rustc diagnostics.", "positive_passages": [{"docid": "doc-en-rust-829ef606971383bd0beae6327fad90a3896165fda6edcecc2d2448e74bf86f4c", "text": " fn s() -> String { let a = String::new(); dbg!(a); return a; //~ ERROR use of moved value: } fn m() -> String { let a = String::new(); dbg!(1, 2, a, 1, 2); return a; //~ ERROR use of moved value: } fn t(a: String) -> String { let b: String = \"\".to_string(); dbg!(a, b); return b; //~ ERROR use of moved value: } fn x(a: String) -> String { let b: String = \"\".to_string(); dbg!(a, b); return a; //~ ERROR use of moved value: } macro_rules! my_dbg { () => { eprintln!(\"[{}:{}:{}]\", file!(), line!(), column!()) }; ($val:expr $(,)?) => { match $val { tmp => { eprintln!(\"[{}:{}:{}] {} = {:#?}\", file!(), line!(), column!(), stringify!($val), &tmp); tmp } } }; ($($val:expr),+ $(,)?) => { ($(my_dbg!($val)),+,) }; } fn test_my_dbg() -> String { let b: String = \"\".to_string(); my_dbg!(b, 1); return b; //~ ERROR use of moved value: } fn test_not_macro() -> String { let a = String::new(); let _b = match a { tmp => { eprintln!(\"dbg: {}\", tmp); tmp } }; return a; //~ ERROR use of moved value: } fn get_expr(_s: String) {} fn test() { let a: String = \"\".to_string(); let _res = get_expr(dbg!(a)); let _l = a.len(); //~ ERROR borrow of moved value } fn main() {} ", "commid": "rust_pr_120990"}], "negative_passages": []}
{"query_id": "q-en-rust-dd66f626e30f9a422ef543deab970d3dbd7b6175d2cb3eb638f16edd4a301a95", "query": "No response No response No response\nI thought the intent with was to use it in a passthrough style, so you can insert it into existing expressions ie. becomes . In which case, maybe the lint should be for not using the value resulting from evaluating .\nNot sure about the majority of users but for me is a convenience over . I use it without using the return value almost all of the time.\nProbably a common thing to do, sure, but if we're talking about adding a lint, why not nudge the user in the direction of the intended and documented usage? You could at least do both: \"insert into an existing expression or pass it a borrow\" etc.\nThe fix in will only report a hint when there is already a happened. For example this code: before this fix the error is: with the fix, since there is already a error, the lint will try to find out the original macro, and suggest a borrow: The suggestion is not 100% right, since if we change it to , we need one more extra fix now(the return type is not expecting a reference), but there is a different issue, at least we have already moved one step on. If the original code is: Then suggest a borrow is a great and correct hint.\nSimilar in the scenario of function argument: original error is: new errors from the fix:\nWould not also be a good hint? The suggestion wouldn't compile anyway right? Because would need to change to take ? Without involved, the compiler does indeed suggest that: Once that's addressed (whichever way it's done), once again becomes transparent, and eg. or works. does not, however, and I see why a \"consider borrowing\" hint would be good there. However, if you make the change first and leave as-is, the resulting error does not include the suggestion to change , which is unhelpful. 
(I agree that the message is even less helpful though, I'm not disputing that.)\nYes, it's a good hint, but from the implementation view of a hint, it's much harder to analyze the following code from the error point and then construct a new expression. Even it seems an easy thing for humans. You are right, the suggestion is not 100% right, we may need an extra follow-up fix after applying the hint, which is also a common thing in rustc diagnostics.", "positive_passages": [{"docid": "doc-en-rust-505d050da985bd66ee4158271f64ac7c98fc819265ee9fa355078232a1333a64", "text": " error[E0382]: use of moved value: `a` --> $DIR/dbg-issue-120327.rs:4:12 | LL | let a = String::new(); | - move occurs because `a` has type `String`, which does not implement the `Copy` trait LL | dbg!(a); | ------- value moved here LL | return a; | ^ value used here after move | help: consider borrowing instead of transferring ownership | LL | dbg!(&a); | + error[E0382]: use of moved value: `a` --> $DIR/dbg-issue-120327.rs:10:12 | LL | let a = String::new(); | - move occurs because `a` has type `String`, which does not implement the `Copy` trait LL | dbg!(1, 2, a, 1, 2); | ------------------- value moved here LL | return a; | ^ value used here after move | help: consider borrowing instead of transferring ownership | LL | dbg!(1, 2, &a, 1, 2); | + error[E0382]: use of moved value: `b` --> $DIR/dbg-issue-120327.rs:16:12 | LL | let b: String = \"\".to_string(); | - move occurs because `b` has type `String`, which does not implement the `Copy` trait LL | dbg!(a, b); | ---------- value moved here LL | return b; | ^ value used here after move | help: consider borrowing instead of transferring ownership | LL | dbg!(a, &b); | + error[E0382]: use of moved value: `a` --> $DIR/dbg-issue-120327.rs:22:12 | LL | fn x(a: String) -> String { | - move occurs because `a` has type `String`, which does not implement the `Copy` trait LL | let b: String = \"\".to_string(); LL | dbg!(a, b); | ---------- value moved here 
LL | return a; | ^ value used here after move | help: consider borrowing instead of transferring ownership | LL | dbg!(&a, b); | + error[E0382]: use of moved value: `b` --> $DIR/dbg-issue-120327.rs:46:12 | LL | tmp => { | --- value moved here ... LL | let b: String = \"\".to_string(); | - move occurs because `b` has type `String`, which does not implement the `Copy` trait LL | my_dbg!(b, 1); LL | return b; | ^ value used here after move | help: consider borrowing instead of transferring ownership | LL | my_dbg!(&b, 1); | + help: borrow this binding in the pattern to avoid moving the value | LL | ref tmp => { | +++ error[E0382]: use of moved value: `a` --> $DIR/dbg-issue-120327.rs:57:12 | LL | let a = String::new(); | - move occurs because `a` has type `String`, which does not implement the `Copy` trait LL | let _b = match a { LL | tmp => { | --- value moved here ... LL | return a; | ^ value used here after move | help: borrow this binding in the pattern to avoid moving the value | LL | ref tmp => { | +++ error[E0382]: borrow of moved value: `a` --> $DIR/dbg-issue-120327.rs:65:14 | LL | let a: String = \"\".to_string(); | - move occurs because `a` has type `String`, which does not implement the `Copy` trait LL | let _res = get_expr(dbg!(a)); | ------- value moved here LL | let _l = a.len(); | ^ value borrowed here after move | help: consider borrowing instead of transferring ownership | LL | let _res = get_expr(dbg!(&a)); | + error: aborting due to 7 previous errors For more information about this error, try `rustc --explain E0382`. ", "commid": "rust_pr_120990"}], "negative_passages": []}
{"query_id": "q-en-rust-dd66f626e30f9a422ef543deab970d3dbd7b6175d2cb3eb638f16edd4a301a95", "query": "No response No response No response\nI thought the intent with was to use it in a passthrough style, so you can insert it into existing expressions ie. becomes . In which case, maybe the lint should be for not using the value resulting from evaluating .\nNot sure about the majority of users but for me is a convenience over . I use it without using the return value almost all of the time.\nProbably a common thing to do, sure, but if we're talking about adding a lint, why not nudge the user in the direction of the intended and documented usage? You could at least do both: \"insert into an existing expression or pass it a borrow\" etc.\nThe fix in will only report a hint when there is already a happened. For example this code: before this fix the error is: with the fix, since there is already a error, the lint will try to find out the original macro, and suggest a borrow: The suggestion is not 100% right, since if we change it to , we need one more extra fix now(the return type is not expecting a reference), but there is a different issue, at least we have already moved one step on. If the original code is: Then suggest a borrow is a great and correct hint.\nSimilar in the scenario of function argument: original error is: new errors from the fix:\nWould not also be a good hint? The suggestion wouldn't compile anyway right? Because would need to change to take ? Without involved, the compiler does indeed suggest that: Once that's addressed (whichever way it's done), once again becomes transparent, and eg. or works. does not, however, and I see why a \"consider borrowing\" hint would be good there. However, if you make the change first and leave as-is, the resulting error does not include the suggestion to change , which is unhelpful. 
(I agree that the message is even less helpful though, I'm not disputing that.)\nYes, it's a good hint, but from the implementation view of a hint, it's much harder to analyze the following code from the error point and then construct a new expression. Even it seems an easy thing for humans. You are right, the suggestion is not 100% right, we may need an extra follow-up fix after applying the hint, which is also a common thing in rustc diagnostics.", "positive_passages": [{"docid": "doc-en-rust-054012863feed619bb0fee6ee44e9da98f182f51d6ada3d677a535788e56b7b4", "text": "| ^^^^^^^ value used here after move | = note: this error originates in the macro `dbg` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider borrowing instead of transferring ownership | LL | let _ = dbg!(&a); | + error: aborting due to 1 previous error", "commid": "rust_pr_120990"}], "negative_passages": []}
{"query_id": "q-en-rust-8a55363f066ba4d7b354eec6162bb20ccc5d3db1122c0d96b8cab2ba3b99609b", "query": " $DIR/cfg-value-for-cfg-name-duplicate.rs:8:7 | LL | #[cfg(value)] | ^^^^^ | = help: expected names are: `bar`, `bee`, `cow`, `debug_assertions`, `doc`, `doctest`, `foo`, `miri`, `overflow_checks`, `panic`, `proc_macro`, `relocation_model`, `sanitize`, `sanitizer_cfi_generalize_pointers`, `sanitizer_cfi_normalize_integers`, `target_abi`, `target_arch`, `target_endian`, `target_env`, `target_family`, `target_feature`, `target_has_atomic`, `target_has_atomic_equal_alignment`, `target_has_atomic_load_store`, `target_os`, `target_pointer_width`, `target_thread_local`, `target_vendor`, `test`, `unix`, `windows` = help: to expect this configuration use `--check-cfg=cfg(value)` = note: see | | ^^^^^^^^^ cannot `break` inside of an `async` block | | ^^^^^^^^^ cannot `break` inside `async` block LL | | }; | |_____- enclosing `async` block error[E0267]: `break` inside of an `async` block error[E0267]: `break` inside `async` block --> $DIR/async-block-control-flow-static-semantics.rs:39:13 | LL | / async { LL | | break 0u8; | | ^^^^^^^^^ cannot `break` inside of an `async` block | | ^^^^^^^^^ cannot `break` inside `async` block LL | | }; | |_________- enclosing `async` block", "commid": "rust_pr_124777"}], "negative_passages": []}
{"query_id": "q-en-rust-58479c4424c9c0ef84197cc5803c3e17ebacd16a312c2d57e3eae1f751f0ab0c", "query": "Same issue as which was for blocks. $DIR/break-inside-coroutine-issue-124495.rs:8:5 | LL | async fn async_fn() { | _____________________- LL | | break; | | ^^^^^ cannot `break` inside `async` function LL | | } | |_- enclosing `async` function error[E0267]: `break` inside `gen` function --> $DIR/break-inside-coroutine-issue-124495.rs:12:5 | LL | gen fn gen_fn() { | _________________- LL | | break; | | ^^^^^ cannot `break` inside `gen` function LL | | } | |_- enclosing `gen` function error[E0267]: `break` inside `async gen` function --> $DIR/break-inside-coroutine-issue-124495.rs:16:5 | LL | async gen fn async_gen_fn() { | _____________________________- LL | | break; | | ^^^^^ cannot `break` inside `async gen` function LL | | } | |_- enclosing `async gen` function error[E0267]: `break` inside `async` block --> $DIR/break-inside-coroutine-issue-124495.rs:20:21 | LL | let _ = async { break; }; | --------^^^^^--- | | | | | cannot `break` inside `async` block | enclosing `async` block error[E0267]: `break` inside `async` closure --> $DIR/break-inside-coroutine-issue-124495.rs:21:24 | LL | let _ = async || { break; }; | --^^^^^--- | | | | | cannot `break` inside `async` closure | enclosing `async` closure error[E0267]: `break` inside `gen` block --> $DIR/break-inside-coroutine-issue-124495.rs:23:19 | LL | let _ = gen { break; }; | ------^^^^^--- | | | | | cannot `break` inside `gen` block | enclosing `gen` block error[E0267]: `break` inside `async gen` block --> $DIR/break-inside-coroutine-issue-124495.rs:25:25 | LL | let _ = async gen { break; }; | ------------^^^^^--- | | | | | cannot `break` inside `async gen` block | enclosing `async gen` block error: aborting due to 7 previous errors For more information about this error, try `rustc --explain E0267`. ", "commid": "rust_pr_124777"}], "negative_passages": []}
{"query_id": "q-en-rust-16acbb4914bb9992ed29a1710c87156554dd8685769f60cdee2f321edfbae7e4", "query": "The idea of changing a field to unit type to preserve field numbering makes sense for fields in the middle of a tuple. However, if the unused field is at the end, or it's the only field, then deleting it won't affect field numbering of any other field. No response No response $DIR/tuple-struct-field.rs:8:26 error: fields `1`, `2`, `3`, and `4` are never read --> $DIR/tuple-struct-field.rs:8:28 | LL | struct SingleUnused(i32, [u8; LEN], String); | ------------ ^^^^^^^^^ LL | struct UnusedAtTheEnd(i32, f32, [u8; LEN], String, u8); | -------------- ^^^ ^^^^^^^^^ ^^^^^^ ^^ | | | field in this struct | fields in this struct | = help: consider removing these fields note: the lint level is defined here --> $DIR/tuple-struct-field.rs:1:9 | LL | #![deny(dead_code)] | ^^^^^^^^^ help: consider changing the field to be of unit type to suppress this warning while preserving the field numbering, or remove the field error: field `0` is never read --> $DIR/tuple-struct-field.rs:13:27 | LL | struct UnusedJustOneField(i32); | ------------------ ^^^ | | | field in this struct | LL | struct SingleUnused(i32, (), String); | ~~ = help: consider removing this field error: fields `0`, `1`, `2`, and `3` are never read --> $DIR/tuple-struct-field.rs:13:23 error: fields `1`, `2`, and `4` are never read --> $DIR/tuple-struct-field.rs:18:31 | LL | struct MultipleUnused(i32, f32, String, u8); | -------------- ^^^ ^^^ ^^^^^^ ^^ LL | struct UnusedInTheMiddle(i32, f32, String, u8, u32); | ----------------- ^^^ ^^^^^^ ^^^ | | | fields in this struct | help: consider changing the fields to be of unit type to suppress this warning while preserving the field numbering, or remove the fields | LL | struct MultipleUnused((), (), (), ()); | ~~ ~~ ~~ ~~ LL | struct UnusedInTheMiddle(i32, (), (), u8, ()); | ~~ ~~ ~~ error: aborting due to 2 previous errors error: aborting due to 3 previous errors ", 
"commid": "rust_pr_124580"}], "negative_passages": []}
{"query_id": "q-en-rust-ea6c2f742466aca108571f2c909618d07c310be9bf346fc494cfcf755c7dcffc", "query": ": target_os = \"illumos\", target_os = \"redox\", target_os = \"solaris\", target_os = \"espidf\", target_os = \"horizon\", target_os = \"vita\",", "commid": "rust_pr_124798"}], "negative_passages": []}
{"query_id": "q-en-rust-571da560101451a8f9d137c6e3ba9f2fe26cf3e8ab89b5854079b9fabdbdb6b5", "query": "This crashes with every edition except 2015 :thinking: $DIR/cycle-import-in-std-1.rs:5:11 | LL | use ops::{self as std}; | ^^^^^^^^^^^ no external crate `ops` | = help: consider importing one of these items instead: core::ops std::ops error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0432`. ", "commid": "rust_pr_126065"}], "negative_passages": []}
{"query_id": "q-en-rust-571da560101451a8f9d137c6e3ba9f2fe26cf3e8ab89b5854079b9fabdbdb6b5", "query": "This crashes with every edition except 2015 :thinking: $DIR/cycle-import-in-std-2.rs:5:11 | LL | use ops::{self as std}; | ^^^^^^^^^^^ no external crate `ops` | = help: consider importing one of these items instead: core::ops std::ops error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0432`. ", "commid": "rust_pr_126065"}], "negative_passages": []}
{"query_id": "q-en-rust-50635d701f713a0327f8a17f571efab941578631e301b92ccd34a211e21f479d", "query": " $DIR/skipped-ref-pats-issue-125058.rs:8:8 | LL | struct Foo; | ^^^ | = note: `#[warn(dead_code)]` on by default warning: unused closure that must be used --> $DIR/skipped-ref-pats-issue-125058.rs:12:5 | LL | / || { LL | | LL | | if let Some(Some(&mut x)) = &mut Some(&mut Some(0)) { LL | | let _: u32 = x; LL | | } LL | | }; | |_____^ | = note: closures are lazy and do nothing unless called = note: `#[warn(unused_must_use)]` on by default warning: 2 warnings emitted ", "commid": "rust_pr_125084"}], "negative_passages": []}
{"query_id": "q-en-rust-5f1ff9fed6de345ede55017963c0b6d14798edc4f39e85a8a1ffec4b4df5b5dc", "query": "This is a recent regression, starting with when is set, in my case to where I have prepared a locally built sysroot (that's ), then rustc now assumes it can find the linker in the sysroot, which does not work. See for some more details. It is new to me that the sysroot is supposed to contain the linker as well, is that expected behavior? Also the error message in that case is really unhelpful.^^ Maybe rustc should check that the path it sets for actually exists before starting ?\nYes. The blog post has a summary of that PR\nThat blog post doesn't mention the term \"sysroot\" so it doesn't help with my issue.\nFWIW what I would have expected is that rustc searches for the linker binary somewhere around . Isn't that how we find some other things as well that are usually installed with rustc by rustup?\nHow do you handle windows-gnu? Imo this also uses a self-contained linker by default.\nNo idea. CI passes on Windows, whatever github gives us when we say .\nYeah the rust-lld linker has been in the sysroot since , is used by default in e.g. some of the wasm and aarch64 targets, and rustc had flags to opt-into using it since that PR. What is new is that it is now used by nightlies we distribute. bootstrap puts it in the sysroot when is true (and it's now the default on x64 linux under certain conditions), so you can also set that when -ing if you want it. It works with or without download-ci-llvm.\nWe resolve the sysroot from rustc driver, not current executable.\nYes, the surprise is just that the linker is searched in the sysroot and thus affected by --sysroot, rather than being searched directly. But I guess this is working as intended, I just need to add some more stuff to these custom sysroots then.\nThis broke cargo's CI since the build-std tests try to verify that nothing in the sysroot is required. Would it be possible to change it so that it will only require rust-lld if it exists in the sysroot (similar to windows-gnu and other targets)?\nI don't know what we do on windows-gnu but we can do something similar, yes. I'll open a PR.\nFWIW I managed to add tests for windows-gnu and they work like a charm. So no copying of files seems to be required there. Though I do recall xargo copying some things for windows-gnu so maybe I am not testing enough... but my tests do involve building a small program against the sysroot and running it, so maybe those files are just not needed any more.\nWe actually hit a binutils linker bug last week on windows-gnu (not windows-gnullvm), and the linker flavor there is gnu-cc, so do these targets really use rust-lld by default?\nHow does this work with cross-compilation -- I assume we are always taking the host sysroot even when building for another target? To me that seems like a sign that this shouldn't be part of the sysroot, it should be part of the rustc distribution. The sysroot is for target-related things.\nIn particular, I would expect that if I set to a folder that only contains a sysroot for some foreign target, then that should work if I also set appropriately (and use the right linker and everything). But currently that will then likely fail as it tries to search the host target in the folder, and with it will instead silently fall back to the slow linker.\nThey do use a self-contained linker from sysroot by default, but not rust-lld. We ship from binutils for those targets.\nThanks for the clarification. That explains why there's no existence check (and your 2nd hack for older GCCs in that other issue :). Locally, it was just using mingw's ld probably because of $PATH differences.\nLinkers are generally target-dependent, so it's pretty reasonable to ship them in target sysroot, I guess. rust-lld is just applicable to a wide range of targets by being a cross-linker. Also there's no target vs host separation for sysroot in rustc right now, there's just one sysroot. Maybe there's some space for improvements here, it's just not many people use to notice any issues.\nI still hope to migrate both rust-lld and mingw linker to the common directory layout scheme based on .\nrust-lld is just applicable to a wide range of targets by being a cross-linker. If I'm on macOS and want to build for a Linux target, then the binary shipped in the sysroot is going to be completely useless to me. So in that sense shipping them in the target sysroot seems pointless. (Of course our cross-compile story is not great so such cross-building will hardly ever work, but it demonstrates the fundamental issue I was pointing out.) Even when I am on and building for I probably want to use a 64bit program for the linking, and not a binary shipped in the i686 sysroot -- this is e.g. how Firefox is (or was) built as 4GB of RAM (the maximum accessible to 32bit programs) are just not enough for linking. Given that these are binaries that need to be run on the host (i.e. the machine where the build happens), IMO the only sensible location is together with the other host stuff, i.e., where lives.\nAh I think I understand. That's right, and on macOS we wouldn't look for in the \"x8664-unknown-linux-gnu sysroot\" . You can try this locally on a helloworld (look for the path to ), from with a target:\nOh, so we always assume that contains a sysroot for the host and the current target. That's news to me as well (and Miri certainly doesn't guarantee it). But the fallback in your PR should solve that as well, I think.", "positive_passages": [{"docid": "doc-en-rust-077dfc1161c651344e1d269055e62dcf2731521cfa7ea2d4db3a177ba91dc09f", "text": "codegen_ssa_select_cpp_build_tool_workload = in the Visual Studio installer, ensure the \"C++ build tools\" workload is selected codegen_ssa_self_contained_linker_missing = the self-contained linker was requested, but it wasn't found in the target's sysroot, or in rustc's sysroot codegen_ssa_shuffle_indices_evaluation = could not evaluate shuffle_indices at compile time codegen_ssa_specify_libraries_to_link = use the `-l` flag to specify native libraries to link", "commid": "rust_pr_125263"}], "negative_passages": []}
{"query_id": "q-en-rust-5f1ff9fed6de345ede55017963c0b6d14798edc4f39e85a8a1ffec4b4df5b5dc", "query": "This is a recent regression, starting with when is set, in my case to where I have prepared a locally built sysroot (that's ), then rustc now assumes it can find the linker in the sysroot, which does not work. See for some more details. It is new to me that the sysroot is supposed to contain the linker as well, is that expected behavior? Also the error message in that case is really unhelpful.^^ Maybe rustc should check that the path it sets for actually exists before starting ?\nYes. The blog post has a summary of that PR\nThat blog post doesn't mention the term \"sysroot\" so it doesn't help with my issue.\nFWIW what I would have expected is that rustc searches for the linker binary somewhere around . Isn't that how we find some other things as well that are usually installed with rustc by rustup?\nHow do you handle windows-gnu? Imo this also uses a self-contained linker by default.\nNo idea. CI passes on Windows, whatever github gives us when we say .\nYeah the rust-lld linker has been in the sysroot since , is used by default in e.g. some of the wasm and aarch64 targets, and rustc had flags to opt-into using it since that PR. What is new is that it is now used by nightlies we distribute. bootstrap puts it in the sysroot when is true (and it's now the default on x64 linux under certain conditions), so you can also set that when -ing if you want it. It works with or without download-ci-llvm.\nWe resolve the sysroot from rustc driver, not current executable.\nYes, the surprise is just that the linker is searched in the sysroot and thus affected by --sysroot, rather than being searched directly. But I guess this is working as intended, I just need to add some more stuff to these custom sysroots then.\nThis broke cargo's CI since the build-std tests try to verify that nothing in the sysroot is required. Would it be possible to change it so that it will only require rust-lld if it exists in the sysroot (similar to windows-gnu and other targets)?\nI don't know what we do on windows-gnu but we can do something similar, yes. I'll open a PR.\nFWIW I managed to add tests for windows-gnu and they work like a charm. So no copying of files seems to be required there. Though I do recall xargo copying some things for windows-gnu so maybe I am not testing enough... but my tests do involve building a small program against the sysroot and running it, so maybe those files are just not needed any more.\nWe actually hit a binutils linker bug last week on windows-gnu (not windows-gnullvm), and the linker flavor there is gnu-cc, so do these targets really use rust-lld by default?\nHow does this work with cross-compilation -- I assume we are always taking the host sysroot even when building for another target? To me that seems like a sign that this shouldn't be part of the sysroot, it should be part of the rustc distribution. The sysroot is for target-related things.\nIn particular, I would expect that if I set to a folder that only contains a sysroot for some foreign target, then that should work if I also set appropriately (and use the right linker and everything). But currently that will then likely fail as it tries to search the host target in the folder, and with it will instead silently fall back to the slow linker.\nThey do use a self-contained linker from sysroot by default, but not rust-lld. We ship from binutils for those targets.\nThanks for the clarification. That explains why there's no existence check (and your 2nd hack for older GCCs in that other issue :). Locally, it was just using mingw's ld probably because of $PATH differences.\nLinkers are generally target-dependent, so it's pretty reasonable to ship them in target sysroot, I guess. rust-lld is just applicable to a wide range of targets by being a cross-linker. Also there's no target vs host separation for sysroot in rustc right now, there's just one sysroot. Maybe there's some space for improvements here, it's just not many people use to notice any issues.\nI still hope to migrate both rust-lld and mingw linker to the common directory layout scheme based on .\nrust-lld is just applicable to a wide range of targets by being a cross-linker. If I'm on macOS and want to build for a Linux target, then the binary shipped in the sysroot is going to be completely useless to me. So in that sense shipping them in the target sysroot seems pointless. (Of course our cross-compile story is not great so such cross-building will hardly ever work, but it demonstrates the fundamental issue I was pointing out.) Even when I am on and building for I probably want to use a 64bit program for the linking, and not a binary shipped in the i686 sysroot -- this is e.g. how Firefox is (or was) built as 4GB of RAM (the maximum accessible to 32bit programs) are just not enough for linking. Given that these are binaries that need to be run on the host (i.e. the machine where the build happens), IMO the only sensible location is together with the other host stuff, i.e., where lives.\nAh I think I understand. That's right, and on macOS we wouldn't look for in the \"x8664-unknown-linux-gnu sysroot\" . You can try this locally on a helloworld (look for the path to ), from with a target:\nOh, so we always assume that contains a sysroot for the host and the current target. That's news to me as well (and Miri certainly doesn't guarantee it). But the fallback in your PR should solve that as well, I think.", "positive_passages": [{"docid": "doc-en-rust-7452057a42604899636274d10c67d7116af1c40e3c5245383d83c7be48316ca9", "text": "let self_contained_linker = self_contained_cli || self_contained_target; if self_contained_linker && !sess.opts.cg.link_self_contained.is_linker_disabled() { let mut linker_path_exists = false; for path in sess.get_tools_search_paths(false) { let linker_path = path.join(\"gcc-ld\"); linker_path_exists |= linker_path.exists(); cmd.arg({ let mut arg = OsString::from(\"-B\"); arg.push(path.join(\"gcc-ld\")); arg.push(linker_path); arg }); } if !linker_path_exists { // As a sanity check, we emit an error if none of these paths exist: we want // self-contained linking and have no linker. sess.dcx().emit_fatal(errors::SelfContainedLinkerMissing); } } // 2. Implement the \"linker flavor\" part of this feature by asking `cc` to use some kind of", "commid": "rust_pr_125263"}], "negative_passages": []}
{"query_id": "q-en-rust-5f1ff9fed6de345ede55017963c0b6d14798edc4f39e85a8a1ffec4b4df5b5dc", "query": "This is a recent regression, starting with when is set, in my case to where I have prepared a locally built sysroot (that's ), then rustc now assumes it can find the linker in the sysroot, which does not work. See for some more details. It is new to me that the sysroot is supposed to contain the linker as well, is that expected behavior? Also the error message in that case is really unhelpful.^^ Maybe rustc should check that the path it sets for actually exists before starting ?\nYes. The blog post has a summary of that PR\nThat blog post doesn't mention the term \"sysroot\" so it doesn't help with my issue.\nFWIW what I would have expected is that rustc searches for the linker binary somewhere around . Isn't that how we find some other things as well that are usually installed with rustc by rustup?\nHow do you handle windows-gnu? Imo this also uses a self-contained linker by default.\nNo idea. CI passes on Windows, whatever github gives us when we say .\nYeah the rust-lld linker has been in the sysroot since , is used by default in e.g. some of the wasm and aarch64 targets, and rustc had flags to opt-into using it since that PR. What is new is that it is now used by nightlies we distribute. bootstrap puts it in the sysroot when is true (and it's now the default on x64 linux under certain conditions), so you can also set that when -ing if you want it. It works with or without download-ci-llvm.\nWe resolve the sysroot from rustc driver, not current executable.\nYes, the surprise is just that the linker is searched in the sysroot and thus affected by --sysroot, rather than being searched directly. But I guess this is working as intended, I just need to add some more stuff to these custom sysroots then.\nThis broke cargo's CI since the build-std tests try to verify that nothing in the sysroot is required. Would it be possible to change it so that it will only require rust-lld if it exists in the sysroot (similar to windows-gnu and other targets)?\nI don't know what we do on windows-gnu but we can do something similar, yes. I'll open a PR.\nFWIW I managed to add tests for windows-gnu and they work like a charm. So no copying of files seems to be required there. Though I do recall xargo copying some things for windows-gnu so maybe I am not testing enough... but my tests do involve building a small program against the sysroot and running it, so maybe those files are just not needed any more.\nWe actually hit a binutils linker bug last week on windows-gnu (not windows-gnullvm), and the linker flavor there is gnu-cc, so do these targets really use rust-lld by default?\nHow does this work with cross-compilation -- I assume we are always taking the host sysroot even when building for another target? To me that seems like a sign that this shouldn't be part of the sysroot, it should be part of the rustc distribution. The sysroot is for target-related things.\nIn particular, I would expect that if I set to a folder that only contains a sysroot for some foreign target, then that should work if I also set appropriately (and use the right linker and everything). But currently that will then likely fail as it tries to search the host target in the folder, and with it will instead silently fall back to the slow linker.\nThey do use a self-contained linker from sysroot by default, but not rust-lld. We ship from binutils for those targets.\nThanks for the clarification. That explains why there's no existence check (and your 2nd hack for older GCCs in that other issue :). Locally, it was just using mingw's ld probably because of $PATH differences.\nLinkers are generally target-dependent, so it's pretty reasonable to ship them in target sysroot, I guess. rust-lld is just applicable to a wide range of targets by being a cross-linker. Also there's no target vs host separation for sysroot in rustc right now, there's just one sysroot. Maybe there's some space for improvements here, it's just not many people use to notice any issues.\nI still hope to migrate both rust-lld and mingw linker to the common directory layout scheme based on .\nrust-lld is just applicable to a wide range of targets by being a cross-linker. If I'm on macOS and want to build for a Linux target, then the binary shipped in the sysroot is going to be completely useless to me. So in that sense shipping them in the target sysroot seems pointless. (Of course our cross-compile story is not great so such cross-building will hardly ever work, but it demonstrates the fundamental issue I was pointing out.) Even when I am on and building for I probably want to use a 64bit program for the linking, and not a binary shipped in the i686 sysroot -- this is e.g. how Firefox is (or was) built as 4GB of RAM (the maximum accessible to 32bit programs) are just not enough for linking. Given that these are binaries that need to be run on the host (i.e. the machine where the build happens), IMO the only sensible location is together with the other host stuff, i.e., where lives.\nAh I think I understand. That's right, and on macOS we wouldn't look for in the \"x8664-unknown-linux-gnu sysroot\" . You can try this locally on a helloworld (look for the path to ), from with a target:\nOh, so we always assume that contains a sysroot for the host and the current target. That's news to me as well (and Miri certainly doesn't guarantee it). But the fallback in your PR should solve that as well, I think.", "positive_passages": [{"docid": "doc-en-rust-b6e2a05bdcd0b4bb1deec57b0fc7371c017906dfb63146bb2609acbdbc8330b7", "text": "pub struct MsvcMissingLinker; #[derive(Diagnostic)] #[diag(codegen_ssa_self_contained_linker_missing)] pub struct SelfContainedLinkerMissing; #[derive(Diagnostic)] #[diag(codegen_ssa_check_installed_visual_studio)] pub struct CheckInstalledVisualStudio;", "commid": "rust_pr_125263"}], "negative_passages": []}
{"query_id": "q-en-rust-5f1ff9fed6de345ede55017963c0b6d14798edc4f39e85a8a1ffec4b4df5b5dc", "query": "This is a recent regression, starting with when is set, in my case to where I have prepared a locally built sysroot (that's ), then rustc now assumes it can find the linker in the sysroot, which does not work. See for some more details. It is new to me that the sysroot is supposed to contain the linker as well, is that expected behavior? Also the error message in that case is really unhelpful.^^ Maybe rustc should check that the path it sets for actually exists before starting ?\nYes. The blog post has a summary of that PR\nThat blog post doesn't mention the term \"sysroot\" so it doesn't help with my issue.\nFWIW what I would have expected is that rustc searches for the linker binary somewhere around . Isn't that how we find some other things as well that are usually installed with rustc by rustup?\nHow do you handle windows-gnu? Imo this also uses a self-contained linker by default.\nNo idea. CI passes on Windows, whatever github gives us when we say .\nYeah the rust-lld linker has been in the sysroot since , is used by default in e.g. some of the wasm and aarch64 targets, and rustc had flags to opt-into using it since that PR. What is new is that it is now used by nightlies we distribute. bootstrap puts it in the sysroot when is true (and it's now the default on x64 linux under certain conditions), so you can also set that when -ing if you want it. It works with or without download-ci-llvm.\nWe resolve the sysroot from rustc driver, not current executable.\nYes, the surprise is just that the linker is searched in the sysroot and thus affected by --sysroot, rather than being searched directly. But I guess this is working as intended, I just need to add some more stuff to these custom sysroots then.\nThis broke cargo's CI since the build-std tests try to verify that nothing in the sysroot is required. Would it be possible to change it so that it will only require rust-lld if it exists in the sysroot (similar to windows-gnu and other targets)?\nI don't know what we do on windows-gnu but we can do something similar, yes. I'll open a PR.\nFWIW I managed to add tests for windows-gnu and they work like a charm. So no copying of files seems to be required there. Though I do recall xargo copying some things for windows-gnu so maybe I am not testing enough... but my tests do involve building a small program against the sysroot and running it, so maybe those files are just not needed any more.\nWe actually hit a binutils linker bug last week on windows-gnu (not windows-gnullvm), and the linker flavor there is gnu-cc, so do these targets really use rust-lld by default?\nHow does this work with cross-compilation -- I assume we are always taking the host sysroot even when building for another target? To me that seems like a sign that this shouldn't be part of the sysroot, it should be part of the rustc distribution. The sysroot is for target-related things.\nIn particular, I would expect that if I set to a folder that only contains a sysroot for some foreign target, then that should work if I also set appropriately (and use the right linker and everything). But currently that will then likely fail as it tries to search the host target in the folder, and with it will instead silently fall back to the slow linker.\nThey do use a self-contained linker from sysroot by default, but not rust-lld. We ship from binutils for those targets.\nThanks for the clarification. That explains why there's no existence check (and your 2nd hack for older GCCs in that other issue :). Locally, it was just using mingw's ld probably because of $PATH differences.\nLinkers are generally target-dependent, so it's pretty reasonable to ship them in target sysroot, I guess. rust-lld is just applicable to a wide range of targets by being a cross-linker. Also there's no target vs host separation for sysroot in rustc right now, there's just one sysroot. Maybe there's some space for improvements here, it's just not many people use to notice any issues.\nI still hope to migrate both rust-lld and mingw linker to the common directory layout scheme based on .\nrust-lld is just applicable to a wide range of targets by being a cross-linker. If I'm on macOS and want to build for a Linux target, then the binary shipped in the sysroot is going to be completely useless to me. So in that sense shipping them in the target sysroot seems pointless. (Of course our cross-compile story is not great so such cross-building will hardly ever work, but it demonstrates the fundamental issue I was pointing out.) Even when I am on and building for I probably want to use a 64bit program for the linking, and not a binary shipped in the i686 sysroot -- this is e.g. how Firefox is (or was) built as 4GB of RAM (the maximum accessible to 32bit programs) are just not enough for linking. Given that these are binaries that need to be run on the host (i.e. the machine where the build happens), IMO the only sensible location is together with the other host stuff, i.e., where lives.\nAh I think I understand. That's right, and on macOS we wouldn't look for in the \"x8664-unknown-linux-gnu sysroot\" . You can try this locally on a helloworld (look for the path to ), from with a target:\nOh, so we always assume that contains a sysroot for the host and the current target. That's news to me as well (and Miri certainly doesn't guarantee it). But the fallback in your PR should solve that as well, I think.", "positive_passages": [{"docid": "doc-en-rust-692794394728d193c742cf59c51d7757e5ab50a3f9c95a588a2b43910a463b08", "text": "PathBuf::from_iter([sysroot, Path::new(&rustlib_path), Path::new(\"lib\")]) } /// Returns a path to the target's `bin` folder within its `rustlib` path in the sysroot. This is /// where binaries are usually installed, e.g. the self-contained linkers, lld-wrappers, LLVM tools, /// etc. pub fn make_target_bin_path(sysroot: &Path, target_triple: &str) -> PathBuf { let rustlib_path = rustc_target::target_rustlib_path(sysroot, target_triple); PathBuf::from_iter([sysroot, Path::new(&rustlib_path), Path::new(\"bin\")]) } #[cfg(unix)] fn current_dll_path() -> Result /// Returns a list of directories where target-specific tool binaries are located. /// Returns a list of directories where target-specific tool binaries are located. Some fallback /// directories are also returned, for example if `--sysroot` is used but tools are missing (#125246): we also add the bin directories to the sysroot where rustc is located. pub fn get_tools_search_paths(&self, self_contained: bool) -> Vec let rustlib_path = rustc_target::target_rustlib_path(&self.sysroot, config::host_triple()); let p = PathBuf::from_iter([ Path::new(&self.sysroot), Path::new(&rustlib_path), Path::new(\"bin\"), ]); if self_contained { vec![p.clone(), p.join(\"self-contained\")] } else { vec![p] } let bin_path = filesearch::make_target_bin_path(&self.sysroot, config::host_triple()); let fallback_sysroot_paths = filesearch::sysroot_candidates() .into_iter() .map(|sysroot| filesearch::make_target_bin_path(&sysroot, config::host_triple())); let search_paths = std::iter::once(bin_path).chain(fallback_sysroot_paths); if self_contained { // The self-contained tools are expected to be e.g. in `bin/self-contained` in the // sysroot's `rustlib` path, so we add such a subfolder to the bin path, and the // fallback paths. search_paths.flat_map(|path| [path.clone(), path.join(\"self-contained\")]).collect() } else { search_paths.collect() } } pub fn init_incr_comp_session(&self, session_dir: PathBuf, lock_file: flock::Lock) {", "commid": "rust_pr_125263"}], "negative_passages": []}
{"query_id": "q-en-rust-df27dcf85bb7e3c2b632ee80f5aed56fb82be88c8d63d6956c90d681989f5e8d", "query": "For example, from (And many more.) I think this is due to Should we just drop these from ?\ncc", "positive_passages": [{"docid": "doc-en-rust-74f641873a99df58de71303c85d821ea2911ef32272e95c5920f9f3e1ce326cf", "text": "(\"avx512bw\", Unstable(sym::avx512_target_feature)), (\"avx512cd\", Unstable(sym::avx512_target_feature)), (\"avx512dq\", Unstable(sym::avx512_target_feature)), (\"avx512er\", Unstable(sym::avx512_target_feature)), (\"avx512f\", Unstable(sym::avx512_target_feature)), (\"avx512fp16\", Unstable(sym::avx512_target_feature)), (\"avx512ifma\", Unstable(sym::avx512_target_feature)), (\"avx512pf\", Unstable(sym::avx512_target_feature)), (\"avx512vbmi\", Unstable(sym::avx512_target_feature)), (\"avx512vbmi2\", Unstable(sym::avx512_target_feature)), (\"avx512vl\", Unstable(sym::avx512_target_feature)),", "commid": "rust_pr_125498"}], "negative_passages": []}
{"query_id": "q-en-rust-df27dcf85bb7e3c2b632ee80f5aed56fb82be88c8d63d6956c90d681989f5e8d", "query": "For example, from (And many more.) I think this is due to Should we just drop these from ?\ncc", "positive_passages": [{"docid": "doc-en-rust-ef862ee23e87868ec40c16e608a512cbc441fc0f8d3eb74eed3eb933aa216368", "text": "println!(\"avx512bw: {:?}\", is_x86_feature_detected!(\"avx512bw\")); println!(\"avx512cd: {:?}\", is_x86_feature_detected!(\"avx512cd\")); println!(\"avx512dq: {:?}\", is_x86_feature_detected!(\"avx512dq\")); println!(\"avx512er: {:?}\", is_x86_feature_detected!(\"avx512er\")); println!(\"avx512f: {:?}\", is_x86_feature_detected!(\"avx512f\")); println!(\"avx512ifma: {:?}\", is_x86_feature_detected!(\"avx512ifma\")); println!(\"avx512pf: {:?}\", is_x86_feature_detected!(\"avx512pf\")); println!(\"avx512vbmi2: {:?}\", is_x86_feature_detected!(\"avx512vbmi2\")); println!(\"avx512vbmi: {:?}\", is_x86_feature_detected!(\"avx512vbmi\")); println!(\"avx512vl: {:?}\", is_x86_feature_detected!(\"avx512vl\"));", "commid": "rust_pr_125498"}], "negative_passages": []}
{"query_id": "q-en-rust-df27dcf85bb7e3c2b632ee80f5aed56fb82be88c8d63d6956c90d681989f5e8d", "query": "For example, from (And many more.) I think this is due to Should we just drop these from ?\ncc", "positive_passages": [{"docid": "doc-en-rust-a1e9c41af4aac74ee90f5ed2f8df12b25b981a340697d5c43668b5d39627d668", "text": "LL | cfg!(target_feature = \"zebra\"); | ^^^^^^^^^^^^^^^^^^^^^^^^ | = note: expected values for `target_feature` are: `10e60`, `2e3`, `3e3r1`, `3e3r2`, `3e3r3`, `3e7`, `7e10`, `a`, `aclass`, `adx`, `aes`, `altivec`, `alu32`, `atomics`, `avx`, `avx2`, `avx512bf16`, `avx512bitalg`, `avx512bw`, `avx512cd`, `avx512dq`, `avx512er`, `avx512f`, `avx512fp16`, `avx512ifma`, `avx512pf`, `avx512vbmi`, `avx512vbmi2`, `avx512vl`, `avx512vnni`, `avx512vp2intersect`, `avx512vpopcntdq`, `bf16`, `bmi1`, and `bmi2` and 188 more = note: expected values for `target_feature` are: `10e60`, `2e3`, `3e3r1`, `3e3r2`, `3e3r3`, `3e7`, `7e10`, `a`, `aclass`, `adx`, `aes`, `altivec`, `alu32`, `atomics`, `avx`, `avx2`, `avx512bf16`, `avx512bitalg`, `avx512bw`, `avx512cd`, `avx512dq`, `avx512f`, `avx512fp16`, `avx512ifma`, `avx512vbmi`, `avx512vbmi2`, `avx512vl`, `avx512vnni`, `avx512vp2intersect`, `avx512vpopcntdq`, `bf16`, `bmi1`, `bmi2`, `bti`, and `bulk-memory` and 186 more = note: see = note: expected values for `target_feature` are: `10e60`, `2e3`, `3e3r1`, `3e3r2`, `3e3r3`, `3e7`, `7e10`, `a`, `aclass`, `adx`, `aes`, `altivec`, `alu32`, `atomics`, `avx`, `avx2`, `avx512bf16`, `avx512bitalg`, `avx512bw`, `avx512cd`, `avx512dq`, `avx512er`, `avx512f`, `avx512fp16`, `avx512ifma`, `avx512pf`, `avx512vbmi`, `avx512vbmi2`, `avx512vl`, `avx512vnni`, `avx512vp2intersect`, `avx512vpopcntdq`, `bf16`, `bmi1`, `bmi2`, `bti`, `bulk-memory`, `c`, `cache`, `cmpxchg16b`, `crc`, `crt-static`, `d`, `d32`, `dit`, `doloop`, `dotprod`, `dpb`, `dpb2`, `dsp`, `dsp1e2`, `dspe60`, `e`, `e1`, `e2`, `edsp`, `elrw`, `ermsb`, `exception-handling`, `extended-const`, `f`, `f16c`, `f32mm`, `f64mm`, `fcma`, `fdivdu`, `fhm`, `flagm`, `float1e2`, `float1e3`, `float3e4`, `float7e60`, `floate1`, `fma`, `fp-armv8`, `fp16`, `fp64`, `fpuv2_df`, `fpuv2_sf`, `fpuv3_df`, `fpuv3_hf`, `fpuv3_hi`, `fpuv3_sf`, `frecipe`, `frintts`, `fxsr`, `gfni`, `hard-float`, `hard-float-abi`, `hard-tp`, `high-registers`, `hvx`, `hvx-length128b`, `hwdiv`, `i8mm`, `jsconv`, `lahfsahf`, `lasx`, `lbt`, `lor`, `lse`, `lsx`, `lvz`, `lzcnt`, `m`, `mclass`, `movbe`, `mp`, `mp1e2`, `msa`, `mte`, `multivalue`, `mutable-globals`, `neon`, `nontrapping-fptoint`, `nvic`, `paca`, `pacg`, `pan`, `pclmulqdq`, `pmuv3`, `popcnt`, `power10-vector`, `power8-altivec`, `power8-vector`, `power9-altivec`, `power9-vector`, `prfchw`, `rand`, `ras`, `rclass`, `rcpc`, `rcpc2`, `rdm`, `rdrand`, `rdseed`, `reference-types`, `relax`, `relaxed-simd`, `rtm`, `sb`, `sha`, `sha2`, `sha3`, `sign-ext`, `simd128`, `sm4`, `spe`, `ssbs`, `sse`, `sse2`, `sse3`, `sse4.1`, `sse4.2`, `sse4a`, `ssse3`, `sve`, `sve2`, `sve2-aes`, `sve2-bitperm`, `sve2-sha3`, `sve2-sm4`, `tbm`, `thumb-mode`, `thumb2`, `tme`, `trust`, `trustzone`, `ual`, `unaligned-scalar-mem`, `v`, `v5te`, `v6`, `v6k`, `v6t2`, `v7`, `v8`, `v8.1a`, `v8.2a`, `v8.3a`, `v8.4a`, `v8.5a`, `v8.6a`, `v8.7a`, `vaes`, `vdsp2e60f`, `vdspv1`, `vdspv2`, `vfp2`, `vfp3`, `vfp4`, `vh`, `virt`, `virtualization`, `vpclmulqdq`, `vsx`, `xsave`, `xsavec`, `xsaveopt`, `xsaves`, `zba`, `zbb`, `zbc`, `zbkb`, `zbkc`, `zbkx`, `zbs`, `zdinx`, `zfh`, `zfhmin`, `zfinx`, `zhinx`, `zhinxmin`, `zk`, `zkn`, `zknd`, `zkne`, `zknh`, `zkr`, `zks`, `zksed`, `zksh`, and `zkt` = note: expected values for `target_feature` are: `10e60`, `2e3`, `3e3r1`, `3e3r2`, `3e3r3`, `3e7`, `7e10`, `a`, `aclass`, `adx`, `aes`, `altivec`, `alu32`, `atomics`, `avx`, `avx2`, `avx512bf16`, `avx512bitalg`, `avx512bw`, `avx512cd`, `avx512dq`, `avx512f`, `avx512fp16`, `avx512ifma`, `avx512vbmi`, `avx512vbmi2`, `avx512vl`, `avx512vnni`, `avx512vp2intersect`, `avx512vpopcntdq`, `bf16`, `bmi1`, `bmi2`, `bti`, `bulk-memory`, `c`, `cache`, `cmpxchg16b`, `crc`, `crt-static`, `d`, `d32`, `dit`, `doloop`, `dotprod`, `dpb`, `dpb2`, `dsp`, `dsp1e2`, `dspe60`, `e`, `e1`, `e2`, `edsp`, `elrw`, `ermsb`, `exception-handling`, `extended-const`, `f`, `f16c`, `f32mm`, `f64mm`, `fcma`, `fdivdu`, `fhm`, `flagm`, `float1e2`, `float1e3`, `float3e4`, `float7e60`, `floate1`, `fma`, `fp-armv8`, `fp16`, `fp64`, `fpuv2_df`, `fpuv2_sf`, `fpuv3_df`, `fpuv3_hf`, `fpuv3_hi`, `fpuv3_sf`, `frecipe`, `frintts`, `fxsr`, `gfni`, `hard-float`, `hard-float-abi`, `hard-tp`, `high-registers`, `hvx`, `hvx-length128b`, `hwdiv`, `i8mm`, `jsconv`, `lahfsahf`, `lasx`, `lbt`, `lor`, `lse`, `lsx`, `lvz`, `lzcnt`, `m`, `mclass`, `movbe`, `mp`, `mp1e2`, `msa`, `mte`, `multivalue`, `mutable-globals`, `neon`, `nontrapping-fptoint`, `nvic`, `paca`, `pacg`, `pan`, `pclmulqdq`, `pmuv3`, `popcnt`, `power10-vector`, `power8-altivec`, `power8-vector`, `power9-altivec`, `power9-vector`, `prfchw`, `rand`, `ras`, `rclass`, `rcpc`, `rcpc2`, `rdm`, `rdrand`, `rdseed`, `reference-types`, `relax`, `relaxed-simd`, `rtm`, `sb`, `sha`, `sha2`, `sha3`, `sign-ext`, `simd128`, `sm4`, `spe`, `ssbs`, `sse`, `sse2`, `sse3`, `sse4.1`, `sse4.2`, `sse4a`, `ssse3`, `sve`, `sve2`, `sve2-aes`, `sve2-bitperm`, `sve2-sha3`, `sve2-sm4`, `tbm`, `thumb-mode`, `thumb2`, `tme`, `trust`, `trustzone`, `ual`, `unaligned-scalar-mem`, `v`, `v5te`, `v6`, `v6k`, `v6t2`, `v7`, `v8`, `v8.1a`, `v8.2a`, `v8.3a`, `v8.4a`, `v8.5a`, `v8.6a`, `v8.7a`, `vaes`, `vdsp2e60f`, `vdspv1`, `vdspv2`, `vfp2`, `vfp3`, `vfp4`, `vh`, `virt`, `virtualization`, `vpclmulqdq`, `vsx`, `xsave`, `xsavec`, `xsaveopt`, `xsaves`, `zba`, `zbb`, `zbc`, `zbkb`, `zbkc`, `zbkx`, `zbs`, `zdinx`, `zfh`, `zfhmin`, `zfinx`, `zhinx`, `zhinxmin`, `zk`, `zkn`, `zknd`, `zkne`, `zknh`, `zkr`, `zks`, `zksed`, `zksh`, and `zkt` = note: see matches!(abi, SpecAbi::Rust | SpecAbi::RustCall | SpecAbi::RustIntrinsic) matches!( abi, SpecAbi::Rust | SpecAbi::RustCall | SpecAbi::RustCold | SpecAbi::RustIntrinsic ) } /// Find any fn-ptr types with external ABIs in `ty`.", "commid": "rust_pr_130667"}], "negative_passages": []}
{"query_id": "q-en-rust-8d9c1a3661356bfdf58db0ec2c24c2f76ace1a0396f38e4f264716f0f2f1ec3d", "query": "() Like and the other ABIs, should not trigger and for non-FFI-safe types. Related: , ; tracking issue . No response No response_", "positive_passages": [{"docid": "doc-en-rust-27a0ec0e26af47c44354400a16f0ef10ea5a9e85d96458e33e7d8e0fadbed124", "text": " //@ check-pass #![feature(rust_cold_cc)] // extern \"rust-cold\" is a \"Rust\" ABI so we accept `repr(Rust)` types as arg/ret without warnings. pub extern \"rust-cold\" fn f(_: ()) -> Result<(), ()> { Ok(()) } extern \"rust-cold\" { pub fn g(_: ()) -> Result<(), ()>; } fn main() {} ", "commid": "rust_pr_130667"}], "negative_passages": []}
{"query_id": "q-en-rust-99d5b1f75e53201a11da405ce1e3581f854c121b0855e6c1ad4e8b9537f27a6f", "query": "When downloading the (and variant) target, the that ships with the compiler is broken: Upon further inspection, the symbols are indeed built for x86-64 and not LoongArch: It's not clear what causes this, but contains no corresponding definitions for . This issue appears related, in that it would catch whatever causes this:\ncc", "positive_passages": [{"docid": "doc-en-rust-d61fa04713df96c055e94ca2fb8db9db90cfba759ac2952880298b6d37a042b5", "text": "AR_loongarch64_unknown_linux_gnu=loongarch64-unknown-linux-gnu-ar CXX_loongarch64_unknown_linux_gnu=loongarch64-unknown-linux-gnu-g++ # We re-use the Linux toolchain for bare-metal, because upstream bare-metal # target support for LoongArch is only available from GCC 14+. # # See: https://github.com/gcc-mirror/gcc/commit/976f4f9e4770 ENV CC_loongarch64_unknown_none=loongarch64-unknown-linux-gnu-gcc AR_loongarch64_unknown_none=loongarch64-unknown-linux-gnu-ar CXX_loongarch64_unknown_none=loongarch64-unknown-linux-gnu-g++ CFLAGS_loongarch64_unknown_none=\"-ffreestanding -mabi=lp64d\" CXXFLAGS_loongarch64_unknown_none=\"-ffreestanding -mabi=lp64d\" CC_loongarch64_unknown_none_softfloat=loongarch64-unknown-linux-gnu-gcc AR_loongarch64_unknown_none_softfloat=loongarch64-unknown-linux-gnu-ar CXX_loongarch64_unknown_none_softfloat=loongarch64-unknown-linux-gnu-g++ CFLAGS_loongarch64_unknown_none_softfloat=\"-ffreestanding -mabi=lp64s -mfpu=none\" CXXFLAGS_loongarch64_unknown_none_softfloat=\"-ffreestanding -mabi=lp64s -mfpu=none\" ENV HOSTS=loongarch64-unknown-linux-gnu ENV TARGETS=$HOSTS ENV TARGETS=$TARGETS,loongarch64-unknown-none ENV TARGETS=$TARGETS,loongarch64-unknown-none-softfloat ENV RUST_CONFIGURE_ARGS --enable-extended ", "commid": "rust_pr_127150"}], "negative_passages": []}
{"query_id": "q-en-rust-99d5b1f75e53201a11da405ce1e3581f854c121b0855e6c1ad4e8b9537f27a6f", "query": "When downloading the (and variant) target, the that ships with the compiler is broken: Upon further inspection, the symbols are indeed built for x86-64 and not LoongArch: It's not clear what causes this, but contains no corresponding definitions for . This issue appears related, in that it would catch whatever causes this:\ncc", "positive_passages": [{"docid": "doc-en-rust-bf34f7f24483ff5be36e5ad187d0e7753a0f9fe70be49b346a898715da130514", "text": "--enable-profiler --disable-docs ENV SCRIPT python3 ../x.py dist --host $HOSTS --target $HOSTS ENV SCRIPT python3 ../x.py dist --host $HOSTS --target $TARGETS ", "commid": "rust_pr_127150"}], "negative_passages": []}
{"query_id": "q-en-rust-99d5b1f75e53201a11da405ce1e3581f854c121b0855e6c1ad4e8b9537f27a6f", "query": "When downloading the (and variant) target, the that ships with the compiler is broken: Upon further inspection, the symbols are indeed built for x86-64 and not LoongArch: It's not clear what causes this, but contains no corresponding definitions for . This issue appears related, in that it would catch whatever causes this:\ncc", "positive_passages": [{"docid": "doc-en-rust-65934332ae8269fd7485b645a5fe8aa49babf7bbbdaf8c0ca31f9961b68355e4", "text": "ENV TARGETS=$TARGETS,armv7-unknown-linux-musleabi ENV TARGETS=$TARGETS,i686-unknown-freebsd ENV TARGETS=$TARGETS,x86_64-unknown-none ENV TARGETS=$TARGETS,loongarch64-unknown-none ENV TARGETS=$TARGETS,loongarch64-unknown-none-softfloat ENV TARGETS=$TARGETS,aarch64-unknown-uefi ENV TARGETS=$TARGETS,i686-unknown-uefi ENV TARGETS=$TARGETS,x86_64-unknown-uefi", "commid": "rust_pr_127150"}], "negative_passages": []}
{"query_id": "q-en-rust-0437dd8ba1fa0903e87a8b83f97256473d6ed1501c67f7aa6c8a6e93b258a164", "query": " $DIR/dangling-alloc-id-ice.rs:12:1 | LL | const FOO: &() = { | ^^^^^^^^^^^^^^ | ^^^^^^^^^^^^^^ constructing invalid value: encountered a dangling reference (use-after-free) | = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior. = note: the raw bytes of the constant (size: $SIZE, align: $ALIGN) { HEX_DUMP } error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0080`. ", "commid": "rust_pr_126426"}], "negative_passages": []}
{"query_id": "q-en-rust-b1521adfb989f38f32348ff8b24b981929409aef73ea7602860c96db097d9127", "query": " $DIR/dangling-zst-ice-issue-126393.rs:7:1 | LL | pub static MAGIC_FFI_REF: &'static Wrapper = unsafe { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ constructing invalid value: encountered a dangling reference (use-after-free) | = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior. = note: the raw bytes of the constant (size: $SIZE, align: $ALIGN) { HEX_DUMP } error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0080`. ", "commid": "rust_pr_126426"}], "negative_passages": []}
{"query_id": "q-en-rust-7ad0b1d7110279401731d0f8966d551330e698e91ad9b40ad0a37050a3d2d1bd", "query": " $DIR/uninhabited.rs:65:9 --> $DIR/uninhabited.rs:63:9 | LL | assert!(false); | ^^^^^^^^^^^^^^ the evaluated program panicked at 'assertion failed: false', $DIR/uninhabited.rs:65:9 | ^^^^^^^^^^^^^^ the evaluated program panicked at 'assertion failed: false', $DIR/uninhabited.rs:63:9 | = note: this error originates in the macro `assert` (in Nightly builds, run with -Z macro-backtrace for more info) error[E0080]: evaluation of constant value failed --> $DIR/uninhabited.rs:87:9 | LL | assert!(false); | ^^^^^^^^^^^^^^ the evaluated program panicked at 'assertion failed: false', $DIR/uninhabited.rs:87:9 | = note: this error originates in the macro `assert` (in Nightly builds, run with -Z macro-backtrace for more info)", "commid": "rust_pr_126493"}], "negative_passages": []}
{"query_id": "q-en-rust-7ad0b1d7110279401731d0f8966d551330e698e91ad9b40ad0a37050a3d2d1bd", "query": " $DIR/uninhabited.rs:49:41 | LL | assert::is_maybe_transmutable::<(), Void>(); | ^^^^ `yawning_void::Void` is uninhabited | ^^^^ `yawning_void_struct::Void` is uninhabited | note: required by a bound in `is_maybe_transmutable` --> $DIR/uninhabited.rs:10:14 | LL | pub fn is_maybe_transmutable // treat PhantomData and positional ZST as public, // we don't want to lint types which only have them, // cause it's a common way to use such types to check things like well-formedness tcx.adt_def(id).all_fields().all(|field| { let adt_def = tcx.adt_def(id); // skip types contain fields of unit and never type, // it's usually intentional to make the type not constructible let not_require_constructor = adt_def.all_fields().any(|field| { let field_type = tcx.type_of(field.did).instantiate_identity(); if field_type.is_phantom_data() { return true; } let is_positional = field.name.as_str().starts_with(|c: char| c.is_ascii_digit()); if is_positional && tcx .layout_of(tcx.param_env(field.did).and(field_type)) .map_or(true, |layout| layout.is_zst()) { return true; } field.vis.is_public() }) field_type.is_unit() || field_type.is_never() }); not_require_constructor || adt_def.all_fields().all(|field| { let field_type = tcx.type_of(field.did).instantiate_identity(); // skip fields of PhantomData, // cause it's a common way to check things like well-formedness if field_type.is_phantom_data() { return true; } field.vis.is_public() }) } /// check struct and its fields are public or not,", "commid": "rust_pr_128104"}], "negative_passages": []}
{"query_id": "q-en-rust-618a64eeb156c9d000d922e4741d61762a493f4016a9dec1460e9214789d0eae", "query": "The struct is meant to not be constructible. Also the behavior is incoherent between tuple structs (correct behavior) and named structs (incorrect behavior). Also, this is a regression, this didn't use to be the case. No response No response\nlabel +F-never_Type +D-inconsistent\nWe will also get such warnings for private structs with fields of never type (): The struct is never constructed although it is not constructible. This behavior is expected and is not a regression. Did you meet any cases using this way in real world? For the incoherent behavior, we also have: This because we only skip positional ZST (PhantomData and generics), this policy is also applied for never read fields:\nCan you elaborate in which sense it is expected? I can understand that it is expected given the current implementation of Rust. It seems between stable and nightly a new lint was implemented. This lint is expected for inhabited types. However it has false positives on inhabited types, i.e. types that are only meant for type-level usage (for example to implement a trait). I agree we should probably categorize this issue as a false positive of a new lint rather than a regression (technically). Thanks, I can see the idea behind it. I'm not convinced about it though, but it doesn't bother me either. So let's just keep this issue focused on the false positive rather than the inconsistent behavior.\nHow do you think about private structs with fields of never type?\nI would consider it the same as public structs, because they can still be \"constructed\" at the type-level. So they are dynamic dead--code but not dead-code since they are used statically. It would be dead-code though if the type is never used (both in dynamic code and static code, i.e. types).\nActually, thinking about this issue, I don't think this is a false positive only for the never type. I think the whole lint is wrong. 
The fact that the type is empty is just a proof that constructing it at term-level is not necessary for the type to be \"alive\" (aka usable). This could also be the case with non-empty types. A lot of people just use unit types for type-level usage only (instead of empty types). So my conclusion would be that linting a type to be dead-code because it is never constructed is brittle as it can't be sure that the type is not meant to be constructed at term-level (and thus only used at type-level). Adding is not a solution because if the type is actually not used at all (including in types) then we actually want the warning. Here is an example: The same thing with a private struct: And now actual dead code:\nI think a better way is: is usually used to do such things like just a proof. Maybe we can add a help for this case.\nBut I agree that we shouldn't emit warnings for pub structs with any fields of never type (and also unit type), this is an intentional behavior.\nOh yes good point. So all good to proceed with adding never type to the list of special types.\nThanks a lot for the quick fix! Once it hits nightly, I'll give it a try. I'm a bit afraid though that this won't fix the underlying problem of understanding which types are not meant to exist at runtime. For example, sometimes instead of using the never type, I also use empty enums such that I can give a name and separate identity to that type. Let's see if it's an issue. If yes, I guess ultimately there would be 2 options: Just use in those cases. Introduce a trait (or better name) for types that are not meant to exist at runtime like unit, never, and phantom types, but could be extended with user types like empty enums. I'll ping this thread again if I actually hit this issue.\nI could test it (using nightly-2024-08-01) and . I just had to change a to in a crate where I didn't have already. Ultimately, is going to be a type alias for so maybe it's not worth supporting it. 
And thinking about empty enums, this should never be an issue for my particular usage, which is to build empty types. Because either the type takes no parameters in which case I use an empty enum directly. Or it takes parameters and I have to use a struct with fields, and to make the struct empty I also have to add a never field. So my concern in the previous message is not an issue for me.", "positive_passages": [{"docid": "doc-en-rust-df4373c2782bcc924202fa885b51dc980e0fea46448d7254a61d1435ca835dc3", "text": "#![forbid(dead_code)] #[derive(Debug)] pub struct Whatever { //~ ERROR struct `Whatever` is never constructed pub struct Whatever { pub field0: (), field1: (), field1: (), //~ ERROR fields `field1`, `field2`, `field3`, and `field4` are never read field2: (), field3: (), field4: (),", "commid": "rust_pr_128104"}], "negative_passages": []}
{"query_id": "q-en-rust-618a64eeb156c9d000d922e4741d61762a493f4016a9dec1460e9214789d0eae", "query": "The struct is meant to not be constructible. Also the behavior is incoherent between tuple structs (correct behavior) and named structs (incorrect behavior). Also, this is a regression, this didn't use to be the case. No response No response\nlabel +F-never_Type +D-inconsistent\nWe will also get such warnings for private structs with fields of never type (): The struct is never constructed although it is not constructible. This behavior is expected and is not a regression. Did you meet any cases using this way in real world? For the incoherent behavior, we also have: This because we only skip positional ZST (PhantomData and generics), this policy is also applied for never read fields:\nCan you elaborate in which sense it is expected? I can understand that it is expected given the current implementation of Rust. It seems between stable and nightly a new lint was implemented. This lint is expected for inhabited types. However it has false positives on inhabited types, i.e. types that are only meant for type-level usage (for example to implement a trait). I agree we should probably categorize this issue as a false positive of a new lint rather than a regression (technically). Thanks, I can see the idea behind it. I'm not convinced about it though, but it doesn't bother me either. So let's just keep this issue focused on the false positive rather than the inconsistent behavior.\nHow do you think about private structs with fields of never type?\nI would consider it the same as public structs, because they can still be \"constructed\" at the type-level. So they are dynamic dead--code but not dead-code since they are used statically. It would be dead-code though if the type is never used (both in dynamic code and static code, i.e. types).\nActually, thinking about this issue, I don't think this is a false positive only for the never type. I think the whole lint is wrong. 
The fact that the type is empty is just a proof that constructing it at term-level is not necessary for the type to be \"alive\" (aka usable). This could also be the case with non-empty types. A lot of people just use unit types for type-level usage only (instead of empty types). So my conclusion would be that linting a type to be dead-code because it is never constructed is brittle as it can't be sure that the type is not meant to be constructed at term-level (and thus only used at type-level). Adding is not a solution because if the type is actually not used at all (including in types) then we actually want the warning. Here is an example: The same thing with a private struct: And now actual dead code:\nI think a better way is: is usually used to do such things like just a proof. Maybe we can add a help for this case.\nBut I agree that we shouldn't emit warnings for pub structs with any fields of never type (and also unit type), this is an intentional behavior.\nOh yes good point. So all good to proceed with adding never type to the list of special types.\nThanks a lot for the quick fix! Once it hits nightly, I'll give it a try. I'm a bit afraid though that this won't fix the underlying problem of understanding which types are not meant to exist at runtime. For example, sometimes instead of using the never type, I also use empty enums such that I can give a name and separate identity to that type. Let's see if it's an issue. If yes, I guess ultimately there would be 2 options: Just use in those cases. Introduce a trait (or better name) for types that are not meant to exist at runtime like unit, never, and phantom types, but could be extended with user types like empty enums. I'll ping this thread again if I actually hit this issue.\nI could test it (using nightly-2024-08-01) and . I just had to change a to in a crate where I didn't have already. Ultimately, is going to be a type alias for so maybe it's not worth supporting it. 
And thinking about empty enums, this should never be an issue for my particular usage, which is to build empty types. Because either the type takes no parameters in which case I use an empty enum directly. Or it takes parameters and I have to use a struct with fields, and to make the struct empty I also have to add a never field. So my concern in the previous message is not an issue for me.", "positive_passages": [{"docid": "doc-en-rust-7d49cb0c1c28911544ef219b4dd6e3e21f6fcc4ce45880631ff8903bd807697f", "text": " error: struct `Whatever` is never constructed --> $DIR/clone-debug-dead-code-in-the-same-struct.rs:4:12 error: fields `field1`, `field2`, `field3`, and `field4` are never read --> $DIR/clone-debug-dead-code-in-the-same-struct.rs:6:5 | LL | pub struct Whatever { | ^^^^^^^^ | -------- fields in this struct LL | pub field0: (), LL | field1: (), | ^^^^^^ LL | field2: (), | ^^^^^^ LL | field3: (), | ^^^^^^ LL | field4: (), | ^^^^^^ | = note: `Whatever` has a derived impl for the trait `Debug`, but this is intentionally ignored during dead code analysis note: the lint level is defined here --> $DIR/clone-debug-dead-code-in-the-same-struct.rs:1:11 |", "commid": "rust_pr_128104"}], "negative_passages": []}
{"query_id": "q-en-rust-618a64eeb156c9d000d922e4741d61762a493f4016a9dec1460e9214789d0eae", "query": "The struct is meant to not be constructible. Also the behavior is incoherent between tuple structs (correct behavior) and named structs (incorrect behavior). Also, this is a regression, this didn't use to be the case. No response No response\nlabel +F-never_Type +D-inconsistent\nWe will also get such warnings for private structs with fields of never type (): The struct is never constructed although it is not constructible. This behavior is expected and is not a regression. Did you meet any cases using this way in real world? For the incoherent behavior, we also have: This because we only skip positional ZST (PhantomData and generics), this policy is also applied for never read fields:\nCan you elaborate in which sense it is expected? I can understand that it is expected given the current implementation of Rust. It seems between stable and nightly a new lint was implemented. This lint is expected for inhabited types. However it has false positives on inhabited types, i.e. types that are only meant for type-level usage (for example to implement a trait). I agree we should probably categorize this issue as a false positive of a new lint rather than a regression (technically). Thanks, I can see the idea behind it. I'm not convinced about it though, but it doesn't bother me either. So let's just keep this issue focused on the false positive rather than the inconsistent behavior.\nHow do you think about private structs with fields of never type?\nI would consider it the same as public structs, because they can still be \"constructed\" at the type-level. So they are dynamic dead--code but not dead-code since they are used statically. It would be dead-code though if the type is never used (both in dynamic code and static code, i.e. types).\nActually, thinking about this issue, I don't think this is a false positive only for the never type. I think the whole lint is wrong. 
The fact that the type is empty is just a proof that constructing it at term-level is not necessary for the type to be \"alive\" (aka usable). This could also be the case with non-empty types. A lot of people just use unit types for type-level usage only (instead of empty types). So my conclusion would be that linting a type to be dead-code because it is never constructed is brittle as it can't be sure that the type is not meant to be constructed at term-level (and thus only used at type-level). Adding is not a solution because if the type is actually not used at all (including in types) then we actually want the warning. Here is an example: The same thing with a private struct: And now actual dead code:\nI think a better way is: is usually used to do such things like just a proof. Maybe we can add a help for this case.\nBut I agree that we shouldn't emit warnings for pub structs with any fields of never type (and also unit type), this is an intentional behavior.\nOh yes good point. So all good to proceed with adding never type to the list of special types.\nThanks a lot for the quick fix! Once it hits nightly, I'll give it a try. I'm a bit afraid though that this won't fix the underlying problem of understanding which types are not meant to exist at runtime. For example, sometimes instead of using the never type, I also use empty enums such that I can give a name and separate identity to that type. Let's see if it's an issue. If yes, I guess ultimately there would be 2 options: Just use in those cases. Introduce a trait (or better name) for types that are not meant to exist at runtime like unit, never, and phantom types, but could be extended with user types like empty enums. I'll ping this thread again if I actually hit this issue.\nI could test it (using nightly-2024-08-01) and . I just had to change a to in a crate where I didn't have already. Ultimately, is going to be a type alias for so maybe it's not worth supporting it. 
And thinking about empty enums, this should never be an issue for my particular usage, which is to build empty types. Because either the type takes no parameters in which case I use an empty enum directly. Or it takes parameters and I have to use a struct with fields, and to make the struct empty I also have to add a never field. So my concern in the previous message is not an issue for me.", "positive_passages": [{"docid": "doc-en-rust-0d2558f6a742582c622a646607fbb198ef44d9a075f9f76b9ac44bbd9ac2621c", "text": " #![feature(never_type)] #![deny(dead_code)] pub struct T1(!); pub struct T2(()); pub struct T3 version = \"0.1.114\" version = \"0.1.117\" source = \"registry+https://github.com/rust-lang/crates.io-index\" checksum = \"eb58b199190fcfe0846f55a3b545cd6b07a34bdd5930a476ff856f3ebcc5558a\" checksum = \"a91dae36d82fe12621dfb5b596d7db766187747749b22e33ac068e1bfc356f4a\" dependencies = [ \"cc\", \"rustc-std-workspace-core\",", "commid": "rust_pr_128691"}], "negative_passages": []}
{"query_id": "q-en-rust-39b479843bd281aed80f459adc0cce2fce8a63b9afa1b90f5aec5eaa619384d4", "query": "I do not have minimised code. This was a sudden CI build failure on nightly for aarch64-unknown-linux-musl () Note that this is a cross compile using cross-rs. It fails on rust version 1.82.0-nightly ( 2024-07-29) but works on 1.82.0-nightly ( 2024-07-28). Needless to say it also works perfectly fine on stable. The error is reproducible on rebuild and locally with cross-rs. It is not reproducible with cargo-zigbuild interestingly enough. I also checked that the version of cross-rs didn't change between the successful and failing builds (no new releases of that project since february). The only change between the successful and failing CI runs were a typo fix to an unrelated CI pipeline (the release one, so it didn't even run yet). As such I'm forced to conclude that the blame lies with Rustc. Steps to reproduce: I expected to see this happen: Compilation successful (as it is with stable for same target, or with -gnu and nightly) Instead, this happened () It most recently worked on: 1.82.0-nightly ( 2024-07-28) (or stable 1.80.0 or beta 1.81.0, take your pick for triaging) :\ncc (of cross-rs fame) in case he has any insight on this regression\nI have hit the same issue in a similar situation. Cross compiling to aarch64-unknown-linux-musl from x86_64 linux in a Github CI pipeline using cross-rs. Link to pipeline failing logs:\nSeems rune isn't needed based on the comment by Updated the title.\nThis is probably\nWG-prioritization assigning priority (). label -I-prioritize +P-high\nYeah that's probably it. We used to build the C version of those symbols but switched to Rust versions with the compiler builtins update. Unfortunately we can't enable them on all platforms because LLVM has crashes, and the logic that would allow optional configuration is apparently not working. 
After that part is fixed, we will need to enable the symbols on all platforms where is f128 (which includes aarch64), since these errors probably come from here I think anyone who needs a quick workaround can add to their dependencies to get the symbols directly, as described near the top of Note that this requires nightly.\nIs it possible to add a dependency conditionally based on if it is nightly? (In a specific range of dates even?) I cannot and will not depend on nightly, but I do test my code on nightly in CI (in order to find breakages like this early). For now I just disabled that nightly build on aarch64 musl.\nSame for me. Not using cross-rs though. Logs are here:\nSame for me, both using as host toolchain and cross compiling.\nLooks like it is happening now too for x86_64.\nNot for my projects, so you should probably provide a reproducer on x86-64 GNU if possible, since that is a tier 1 platform (unlike Aarch64 on Musl). Or is this about some other x8664 variation that is also tier 2?\nWoops, sorry, the x86_64 build failed too, but then i looked at the aarch64 build logs :) which still fail.", "positive_passages": [{"docid": "doc-en-rust-458097da950ec73b1580b21c6e2853992946e901d33970069de8e47ddd9f26b4", "text": "[dependencies] core = { path = \"../core\" } compiler_builtins = { version = \"0.1.114\", features = ['rustc-dep-of-std'] } [target.'cfg(not(any(target_arch = \"aarch64\", target_arch = \"x86\", target_arch = \"x86_64\")))'.dependencies] compiler_builtins = { version = \"0.1.114\", features = [\"no-f16-f128\"] } compiler_builtins = { version = \"0.1.117\", features = ['rustc-dep-of-std'] } [dev-dependencies] rand = { version = \"0.8.5\", default-features = false, features = [\"alloc\"] }", "commid": "rust_pr_128691"}], "negative_passages": []}
{"query_id": "q-en-rust-39b479843bd281aed80f459adc0cce2fce8a63b9afa1b90f5aec5eaa619384d4", "query": "I do not have minimised code. This was a sudden CI build failure on nightly for aarch64-unknown-linux-musl () Note that this is a cross compile using cross-rs. It fails on rust version 1.82.0-nightly ( 2024-07-29) but works on 1.82.0-nightly ( 2024-07-28). Needless to say it also works perfectly fine on stable. The error is reproducible on rebuild and locally with cross-rs. It is not reproducible with cargo-zigbuild interestingly enough. I also checked that the version of cross-rs didn't change between the successful and failing builds (no new releases of that project since february). The only change between the successful and failing CI runs were a typo fix to an unrelated CI pipeline (the release one, so it didn't even run yet). As such I'm forced to conclude that the blame lies with Rustc. Steps to reproduce: I expected to see this happen: Compilation successful (as it is with stable for same target, or with -gnu and nightly) Instead, this happened () It most recently worked on: 1.82.0-nightly ( 2024-07-28) (or stable 1.80.0 or beta 1.81.0, take your pick for triaging) :\ncc (of cross-rs fame) in case he has any insight on this regression\nI have hit the same issue in a similar situation. Cross compiling to aarch64-unknown-linux-musl from x86_64 linux in a Github CI pipeline using cross-rs. Link to pipeline failing logs:\nSeems rune isn't needed based on the comment by Updated the title.\nThis is probably\nWG-prioritization assigning priority (). label -I-prioritize +P-high\nYeah that's probably it. We used to build the C version of those symbols but switched to Rust versions with the compiler builtins update. Unfortunately we can't enable them on all platforms because LLVM has crashes, and the logic that would allow optional configuration is apparently not working. 
After that part is fixed, we will need to enable the symbols on all platforms where is f128 (which includes aarch64), since these errors probably come from here I think anyone who needs a quick workaround can add to their dependencies to get the symbols directly, as described near the top of Note that this requires nightly.\nIs it possible to add a dependency conditionally based on if it is nightly? (In a specific range of dates even?) I cannot and will not depend on nightly, but I do test my code on nightly in CI (in order to find breakages like this early). For now I just disabled that nightly build on aarch64 musl.\nSame for me. Not using cross-rs though. Logs are here:\nSame for me, both using as host toolchain and cross compiling.\nLooks like it is happening now too for x86_64.\nNot for my projects, so you should probably provide a reproducer on x86-64 GNU if possible, since that is a tier 1 platform (unlike Aarch64 on Musl). Or is this about some other x8664 variation that is also tier 2?\nWoops, sorry, the x86_64 build failed too, but then i looked at the aarch64 build logs :) which still fail.", "positive_passages": [{"docid": "doc-en-rust-c70c61db5e9da4571b1e639badafd453f8a77b508d9c7f0a20bd5aa1cec20d8d", "text": "panic_unwind = { path = \"../panic_unwind\", optional = true } panic_abort = { path = \"../panic_abort\" } core = { path = \"../core\", public = true } compiler_builtins = { version = \"0.1.114\" } compiler_builtins = { version = \"0.1.117\" } profiler_builtins = { path = \"../profiler_builtins\", optional = true } unwind = { path = \"../unwind\" } hashbrown = { version = \"0.14\", default-features = false, features = [", "commid": "rust_pr_128691"}], "negative_passages": []}
{"query_id": "q-en-rust-7d1a44241c0b1b81e77c871231dc25d1d0e136681c52dc9962b5a1c460da767c", "query": "Found by grepping through the compiler via while reviewing an unrelated PR. File : File : : . or move source file . : . #[suggestion( code = \"Box::new({code})\", applicability = \"machine-applicable\", style = \"verbose\" )] pub span: Span, pub code: &'a str, #[subdiagnostic] pub sugg: AddBoxNew, } #[derive(Subdiagnostic)] #[multipart_suggestion( parse_box_syntax_removed_suggestion, applicability = \"machine-applicable\", style = \"verbose\" )] pub struct AddBoxNew { #[suggestion_part(code = \"Box::new(\")] pub box_kw_and_lo: Span, #[suggestion_part(code = \")\")] pub hi: Span, } #[derive(Diagnostic)]", "commid": "rust_pr_128496"}], "negative_passages": []}
{"query_id": "q-en-rust-7d1a44241c0b1b81e77c871231dc25d1d0e136681c52dc9962b5a1c460da767c", "query": "Found by grepping through the compiler via while reviewing an unrelated PR. File : File : : . or move source file . : . let (span, _) = self.parse_expr_prefix_common(box_kw)?; let inner_span = span.with_lo(box_kw.hi()); let code = self.psess.source_map().span_to_snippet(inner_span).unwrap(); let guar = self.dcx().emit_err(errors::BoxSyntaxRemoved { span: span, code: code.trim() }); let (span, expr) = self.parse_expr_prefix_common(box_kw)?; // Make a multipart suggestion instead of `span_to_snippet` in case source isn't available let box_kw_and_lo = box_kw.until(self.interpolated_or_expr_span(&expr)); let hi = span.shrink_to_hi(); let sugg = errors::AddBoxNew { box_kw_and_lo, hi }; let guar = self.dcx().emit_err(errors::BoxSyntaxRemoved { span, sugg }); Ok((span, ExprKind::Err(guar))) }", "commid": "rust_pr_128496"}], "negative_passages": []}
{"query_id": "q-en-rust-7d1a44241c0b1b81e77c871231dc25d1d0e136681c52dc9962b5a1c460da767c", "query": "Found by grepping through the compiler via while reviewing an unrelated PR. File : File : : . or move source file . : . | ~~~~~~~~~~~~ | ~~~~~~~~~ + error: aborting due to 5 previous errors", "commid": "rust_pr_128496"}], "negative_passages": []}
{"query_id": "q-en-rust-ee2e76fa3f7f37b887c4872309d16214ee91a201697970e4a57f1a2ff2110969", "query": " $DIR/inline-tainted-body.rs:7:21 | LL | pub struct WeakOnce warning-crlf.rs eol=crlf warning-crlf.rs -text ", "commid": "rust_pr_128755"}], "negative_passages": []}
{"query_id": "q-en-rust-adf668ef77fe46470cf478c31ac88114f86def44be52d776c44ee99ad60314a4", "query": "Right now using to manage the rust-lang/rust repo locally causes to fail: This is a result of which is responsible for adding the character to the failing test. While I would love to get that support to jj upstream I'm assuming it is non-trivial and I'd like a better solution in the short term so I can continue to use while contributing to the compiler. suggested I use in the meantime which I'm fine with but I'd like a way to set this in my or some equivalent mechanism doesn't require manually adding this set of arguments to every invocation of I make.\nnote this is essentially the same as i actually would like an even more general solution, where the flags and configs are completely interchangable and you can specify either in either place, but a solution just for seems ok too. label T-bootstrap A-contributor-roadblock\nSo it turns out that won't work as a short term fix even because does not work with tidy atm. For now I'm going with but I expect this to break frequently and be a source of annoyance so I'll be looking towards fixing this soon if it does in fact end up being a problem.\nre crlf, as a source control expert I'd recommend never using any kind of autoconversion -- it is a bit of a nightmare to work with in too many cases. Rather, I'd consider checking in the file with CRLFs, and disabling autoconversion so files are always checked out as-is. (Another option is to effectively treat the file as binary content, e.g. use some other means of encoding the file, such as base64 or putting it in a tarball. But I don't know how practical that is)", "positive_passages": [{"docid": "doc-en-rust-d7e7843b75e4d048bc140a8fc94e037675fe83706daf54565e7ebfd3939a7fbe", "text": " // ignore-tidy-cr //@ check-pass // This file checks the spans of intra-link warnings in a file with CRLF line endings. The // .gitattributes file in this directory should enforce it. 
/// [error] pub struct A; //~^^ WARNING `error` /// /// docs [error1] //~^ WARNING `error1` /// docs [error2] /// pub struct B; //~^^^ WARNING `error2` /** * This is a multi-line comment. * * It also has an [error]. */ pub struct C; //~^^^ WARNING `error` // ignore-tidy-cr //@ check-pass // This file checks the spans of intra-link warnings in a file with CRLF line endings. The // .gitattributes file in this directory should enforce it. /// [error] pub struct A; //~^^ WARNING `error` /// /// docs [error1] //~^ WARNING `error1` /// docs [error2] /// pub struct B; //~^^^ WARNING `error2` /** * This is a multi-line comment. * * It also has an [error]. */ pub struct C; //~^^^ WARNING `error` ", "commid": "rust_pr_128755"}], "negative_passages": []}
{"query_id": "q-en-rust-597ef8939b679433bc04ca082ead1a6e979b471e6d5cfd80d8692c02d73316fc", "query": " $DIR/suggest-arg-comma-delete-ice.rs:15:14 | LL | main(rahh\uff09; | ^^ | help: Unicode character '\uff09' (Fullwidth Right Parenthesis) looks like ')' (Right Parenthesis), but it is not | LL | main(rahh); | ~ error[E0425]: cannot find value `rahh` in this scope --> $DIR/suggest-arg-comma-delete-ice.rs:15:10 | LL | main(rahh\uff09; | ^^^^ not found in this scope error[E0061]: this function takes 0 arguments but 1 argument was supplied --> $DIR/suggest-arg-comma-delete-ice.rs:15:5 | LL | main(rahh\uff09; | ^^^^ ---- unexpected argument | note: function defined here --> $DIR/suggest-arg-comma-delete-ice.rs:11:4 | LL | fn main() { | ^^^^ help: remove the extra argument | LL - main(rahh\uff09; LL + main(\uff09; | error: aborting due to 3 previous errors Some errors have detailed explanations: E0061, E0425. For more information about an error, try `rustc --explain E0061`. ", "commid": "rust_pr_128864"}], "negative_passages": []}
{"query_id": "q-en-rust-c3be9ff8c7b370c66d2cc2e5fc53e059aa7da85c6e30fdb436f526a8522bd11f", "query": " $DIR/suggest-remove-compount-assign-let-ice.rs:13:11 | LL | let x \u2796= 1; | ^^ | help: Unicode character '\u2796' (Heavy Minus Sign) looks like '-' (Minus/Hyphen), but it is not | LL | let x -= 1; | ~ error: can't reassign to an uninitialized variable --> $DIR/suggest-remove-compount-assign-let-ice.rs:13:11 | LL | let x \u2796= 1; | ^^^ | = help: if you meant to overwrite, remove the `let` binding help: initialize the variable | LL - let x \u2796= 1; LL + let x = 1; | error: aborting due to 2 previous errors ", "commid": "rust_pr_128865"}], "negative_passages": []}
{"query_id": "q-en-rust-1b2872af88c774857e577873fafa7afe90711e18518c693fd389f4a7c836e231", "query": "Since GDB used on CI is recent enough to run the new tests. However, there are three failures that have been ignored in : // gdb-check:$3 = option_like_enum::MoreFields::Full(454545, 0x87654321, 9988) // gdb-check:$3 = option_like_enum::MoreFields::Full(454545, 0x[...], 9988) // gdb-command:print empty_gdb.discr // gdb-check:$4 = (*mut isize) 0x1 // gdb-command:print empty // gdb-check:$4 = option_like_enum::MoreFields::Empty // gdb-command:print droid // gdb-check:$5 = option_like_enum::NamedFields::Droid{id: 675675, range: 10000001, internals: 0x43218765} // gdb-check:$5 = option_like_enum::NamedFields::Droid{id: 675675, range: 10000001, internals: 0x[...]} // gdb-command:print void_droid_gdb.internals // gdb-check:$6 = (*mut isize) 0x1 // gdb-command:print void_droid // gdb-check:$6 = option_like_enum::NamedFields::Void // gdb-command:print nested_non_zero_yep // gdb-check:$7 = option_like_enum::NestedNonZero::Yep(10.5, option_like_enum::NestedNonZeroField {a: 10, b: 20, c: 0x[...]})", "commid": "rust_pr_129672"}], "negative_passages": []}
{"query_id": "q-en-rust-1b2872af88c774857e577873fafa7afe90711e18518c693fd389f4a7c836e231", "query": "Since GDB used on CI is recent enough to run the new tests. However, there are three failures that have been ignored in : // lldb-check:[...] Full(454545, &0x87654321, 9988) // lldb-check:[...] Full(454545, &0x[...], 9988) // lldb-command:v empty // lldb-check:[...] Empty // lldb-command:v droid // lldb-check:[...] Droid { id: 675675, range: 10000001, internals: &0x43218765 } // lldb-check:[...] Droid { id: 675675, range: 10000001, internals: &0x[...] } // lldb-command:v void_droid // lldb-check:[...] Void", "commid": "rust_pr_129672"}], "negative_passages": []}
{"query_id": "q-en-rust-1b2872af88c774857e577873fafa7afe90711e18518c693fd389f4a7c836e231", "query": "Since GDB used on CI is recent enough to run the new tests. However, there are three failures that have been ignored in : let some: Option<&u32> = Some(unsafe { std::mem::transmute(0x12345678_usize) }); let some: Option<&u32> = Some(&1234); let none: Option<&u32> = None; let full = MoreFields::Full(454545, unsafe { std::mem::transmute(0x87654321_usize) }, 9988); let full = MoreFields::Full(454545, &1234, 9988); let empty = MoreFields::Empty; let empty_gdb: &MoreFieldsRepr = unsafe { std::mem::transmute(&MoreFields::Empty) }; let droid = NamedFields::Droid { id: 675675, range: 10000001, internals: unsafe { std::mem::transmute(0x43218765_usize) } internals: &1234, }; let void_droid = NamedFields::Void; let void_droid_gdb: &NamedFieldsRepr = unsafe { std::mem::transmute(&NamedFields::Void) }; let x = 'x'; let nested_non_zero_yep = NestedNonZero::Yep( 10.5, NestedNonZeroField { a: 10, b: 20, c: &x c: &'x', }); let nested_non_zero_nope = NestedNonZero::Nope; zzz(); // #break", "commid": "rust_pr_129672"}], "negative_passages": []}
{"query_id": "q-en-rust-e7a320c98519d6fc4b2fd32e4b6eef4cdfa5f12d2bc2a3a5c7d0e5b9670e2657", "query": " $DIR/unused-parens-for-stmt-expr-attributes-issue-129833.rs:9:13 | LL | let _ = (#[inline] #[allow(dead_code)] || println!(\"Hello!\")); | ^ ^ | note: the lint level is defined here --> $DIR/unused-parens-for-stmt-expr-attributes-issue-129833.rs:6:9 | LL | #![deny(unused_parens)] | ^^^^^^^^^^^^^ help: remove these parentheses | LL - let _ = (#[inline] #[allow(dead_code)] || println!(\"Hello!\")); LL + let _ = #[inline] #[allow(dead_code)] || println!(\"Hello!\"); | error: unnecessary parentheses around block return value --> $DIR/unused-parens-for-stmt-expr-attributes-issue-129833.rs:10:5 | LL | (#[inline] #[allow(dead_code)] || println!(\"Hello!\")) | ^ ^ | help: remove these parentheses | LL - (#[inline] #[allow(dead_code)] || println!(\"Hello!\")) LL + #[inline] #[allow(dead_code)] || println!(\"Hello!\") | error: aborting due to 2 previous errors ", "commid": "rust_pr_131546"}], "negative_passages": []}
{"query_id": "q-en-rust-5923fe45b718fab55ab35d21c8370569fc6087fac5f9f0992fc6d95dce139465", "query": "Ferrocene CI has detected that this test was broken by rust-lang/rust . Specifically, by the change in , shown below: Test output: Reverting that single line diff fixes the test for the QNX7.1 targets, e.g. and can the change be reverted or does QNX7.0 need to use , instead of , here? from looking at libc, it appears that both and are available on QNX7.0 so reverting the change should at least not cause compilation or linking errors. if the case is the latter, then we should use in addition to .\nthanks for the report! I will need to re-test it for QNX 7.0, to see if it is a 7.0 requirement (i don't recall why this was required initially, but clearly I ran into some issues with it). I also plan to do a similar automation to ensure 7.0 builds run without problems on our hardware. Is there a list of tests you see failing on 7.1 that are OK to ignore? I see a list - is this the most relevant one? Thx!\nwe do not ignore any single unit test at the moment and we run library (e.g. libstd) tests as well as (cross) compilation tests using . we don't pass to but instead pass the list of test suites as arguments. we currently run these test suites:\nWG-prioritization assigning priority (). label -I-prioritize +P-low\nfixed in", "positive_passages": [{"docid": "doc-en-rust-858956bfbeacc4a26c50bcf4452304c01e84baadaa75e070afdc168fcca7090c", "text": "run_path_with_cstr(original, &|original| { run_path_with_cstr(link, &|link| { cfg_if::cfg_if! { if #[cfg(any(target_os = \"vxworks\", target_os = \"redox\", target_os = \"android\", target_os = \"espidf\", target_os = \"horizon\", target_os = \"vita\", target_os = \"nto\"))] { if #[cfg(any(target_os = \"vxworks\", target_os = \"redox\", target_os = \"android\", target_os = \"espidf\", target_os = \"horizon\", target_os = \"vita\", target_env = \"nto70\"))] { // VxWorks, Redox and ESP-IDF lack `linkat`, so use `link` instead. 
POSIX leaves // it implementation-defined whether `link` follows symlinks, so rely on the // `symlink_hard_link` test in library/std/src/fs/tests.rs to check the behavior.", "commid": "rust_pr_130248"}], "negative_passages": []}
{"query_id": "q-en-rust-a4b53c3645bc265e33649832a59ad749f3726a8fed73b58b68318620ba8777f9", "query": " $DIR/global-cache-and-parallel-frontend.rs:15:17 | LL | #[derive(Clone, Eq)] | ^^ the trait `Clone` is not implemented for `T`, which is required by `Struct pub fn is_async(self) -> bool { matches!(self, CoroutineKind::Async { .. }) } pub fn is_gen(self) -> bool { matches!(self, CoroutineKind::Gen { .. }) pub fn as_str(self) -> &'static str { match self { CoroutineKind::Async { .. } => \"async\", CoroutineKind::Gen { .. } => \"gen\", CoroutineKind::AsyncGen { .. } => \"async gen\", } } pub fn closure_id(self) -> NodeId {", "commid": "rust_pr_130252"}], "negative_passages": []}
{"query_id": "q-en-rust-7de2dab580f301c8ce6b2809875c28dcb5912dccad192bd07b5e03d66bc6bf9a", "query": "() Reproduces on the playground using . label +D-incorrect +F-gen_blocks +requires-nightly", "positive_passages": [{"docid": "doc-en-rust-bfc1eabfac7834a3e820eced388b8602562a46673f8aee8c6701f0f9a76589c5", "text": "ast_passes_bound_in_context = bounds on `type`s in {$ctx} have no effect ast_passes_const_and_async = functions cannot be both `const` and `async` .const = `const` because of this .async = `async` because of this .label = {\"\"} ast_passes_const_and_c_variadic = functions cannot be both `const` and C-variadic .const = `const` because of this .variadic = C-variadic because of this ast_passes_const_and_coroutine = functions cannot be both `const` and `{$coroutine_kind}` .const = `const` because of this .coroutine = `{$coroutine_kind}` because of this .label = {\"\"} ast_passes_const_bound_trait_object = const trait bounds are not allowed in trait object types ast_passes_const_without_body =", "commid": "rust_pr_130252"}], "negative_passages": []}
{"query_id": "q-en-rust-7de2dab580f301c8ce6b2809875c28dcb5912dccad192bd07b5e03d66bc6bf9a", "query": "() Reproduces on the playground using . label +D-incorrect +F-gen_blocks +requires-nightly", "positive_passages": [{"docid": "doc-en-rust-1dbb75c1a7a05e588bd321a5a3aa81d5201d64f51e69e9fdee68c24fcead2f51", "text": "// Functions cannot both be `const async` or `const gen` if let Some(&FnHeader { constness: Const::Yes(cspan), constness: Const::Yes(const_span), coroutine_kind: Some(coroutine_kind), .. }) = fk.header() { let aspan = match coroutine_kind { CoroutineKind::Async { span: aspan, .. } | CoroutineKind::Gen { span: aspan, .. } | CoroutineKind::AsyncGen { span: aspan, .. } => aspan, }; // FIXME(gen_blocks): Report a different error for `const gen` self.dcx().emit_err(errors::ConstAndAsync { spans: vec![cspan, aspan], cspan, aspan, self.dcx().emit_err(errors::ConstAndCoroutine { spans: vec![coroutine_kind.span(), const_span], const_span, coroutine_span: coroutine_kind.span(), coroutine_kind: coroutine_kind.as_str(), span, }); }", "commid": "rust_pr_130252"}], "negative_passages": []}
{"query_id": "q-en-rust-7de2dab580f301c8ce6b2809875c28dcb5912dccad192bd07b5e03d66bc6bf9a", "query": "() Reproduces on the playground using . label +D-incorrect +F-gen_blocks +requires-nightly", "positive_passages": [{"docid": "doc-en-rust-1095c86314a09036287669a20080bc8a574823c01c9e364c4848fc94b1c106c7", "text": "} #[derive(Diagnostic)] #[diag(ast_passes_const_and_async)] pub(crate) struct ConstAndAsync { #[diag(ast_passes_const_and_coroutine)] pub(crate) struct ConstAndCoroutine { #[primary_span] pub spans: Vec, #[label(ast_passes_const)] pub cspan: Span, #[label(ast_passes_async)] pub aspan: Span, pub const_span: Span, #[label(ast_passes_coroutine)] pub coroutine_span: Span, #[label] pub span: Span, pub coroutine_kind: &'static str, } #[derive(Diagnostic)]", "commid": "rust_pr_130252"}], "negative_passages": []}
{"query_id": "q-en-rust-7de2dab580f301c8ce6b2809875c28dcb5912dccad192bd07b5e03d66bc6bf9a", "query": "() Reproduces on the playground using . label +D-incorrect +F-gen_blocks +requires-nightly", "positive_passages": [{"docid": "doc-en-rust-6dd5ca973f3aaa9fa3c0cc4f59386cd990f3525a0e029376e9223ef5ddd3923f", "text": ") => { eq_closure_binder(lb, rb) && lc == rc && la.map_or(false, CoroutineKind::is_async) == ra.map_or(false, CoroutineKind::is_async) && eq_coroutine_kind(*la, *ra) && lm == rm && eq_fn_decl(lf, rf) && eq_expr(le, re)", "commid": "rust_pr_130252"}], "negative_passages": []}
{"query_id": "q-en-rust-7de2dab580f301c8ce6b2809875c28dcb5912dccad192bd07b5e03d66bc6bf9a", "query": "() Reproduces on the playground using . label +D-incorrect +F-gen_blocks +requires-nightly", "positive_passages": [{"docid": "doc-en-rust-70650facd78c1e6390bab57b8c75f2bc365c8c58abfdbf638bb4b8df02a564e3", "text": "} } fn eq_coroutine_kind(a: Option fail!(\"[{}] FormatMessage failure\", errno()); // Sometimes FormatMessageW can fail e.g. system doesn't like langId, let fm_err = errno(); return format!(\"OS Error {} (FormatMessageW() returned error {})\", err, fm_err); } str::from_utf16(str::truncate_utf16_at_nul(buf)) .expect(\"FormatMessageW returned invalid UTF-16\") let msg = str::from_utf16(str::truncate_utf16_at_nul(buf)); match msg { Some(msg) => format!(\"OS Error {}: {}\", err, msg), None => format!(\"OS Error {} (FormatMessageW() returned invalid UTF-16)\", err), } } }", "commid": "rust_pr_13078"}], "negative_passages": []}
{"query_id": "q-en-rust-f19e8c3b4cc91be92d062a245f376c4f67af088e4fe6f0297917e24bf214b064", "query": "The crater run for merged doctests () detected a large number of regressions due to tests examining the process arguments. (I don't have an exact number, but let's say ~50 projects.) Previously, there were no arguments, but now it gets arguments like which the user's CLI parsing code isn't expecting. For example, using something like will now fail. It seems to be fairly common to have objects whose default constructor will parse the arguments from the command line. I'm not sure if or how we should resolve that. One idea is to use a different mechanism for passing in the test information (like via an environment variable).\nI vaguely remember that using environment variables was discussed but can't find anything except mentioning it . Maybe it'd be a better approach indeed. Let me send a PR so we can make a crater run on it.", "positive_passages": [{"docid": "doc-en-rust-dfb26e0c8f1f756a927307678888fe4ab54c89ac492fa8aecc13579a95f54671", "text": "} else { cmd = Command::new(&output_file); if doctest.is_multiple_tests { cmd.arg(\"*doctest-bin-path\"); cmd.arg(&output_file); cmd.env(\"RUSTDOC_DOCTEST_BIN_PATH\", &output_file); } } if let Some(run_directory) = &rustdoc_options.test_run_directory {", "commid": "rust_pr_131095"}], "negative_passages": []}
{"query_id": "q-en-rust-f19e8c3b4cc91be92d062a245f376c4f67af088e4fe6f0297917e24bf214b064", "query": "The crater run for merged doctests () detected a large number of regressions due to tests examining the process arguments. (I don't have an exact number, but let's say ~50 projects.) Previously, there were no arguments, but now it gets arguments like which the user's CLI parsing code isn't expecting. For example, using something like will now fail. It seems to be fairly common to have objects whose default constructor will parse the arguments from the command line. I'm not sure if or how we should resolve that. One idea is to use a different mechanism for passing in the test information (like via an environment variable).\nI vaguely remember that using environment variables was discussed but can't find anything except mentioning it . Maybe it'd be a better approach indeed. Let me send a PR so we can make a crater run on it.", "positive_passages": [{"docid": "doc-en-rust-2ef5b5e2eb60692e1eab8ad0a38d69421c14112e6faef323a92f5fed30ddeff0", "text": "use std::path::PathBuf; pub static BINARY_PATH: OnceLock pub const RUN_OPTION: &str = \"*doctest-inner-test\"; pub const BIN_OPTION: &str = \"*doctest-bin-path\"; pub const RUN_OPTION: &str = \"RUSTDOC_DOCTEST_RUN_NB_TEST\"; #[allow(unused)] pub fn doctest_path() -> Option<&'static PathBuf> {{", "commid": "rust_pr_131095"}], "negative_passages": []}
{"query_id": "q-en-rust-f19e8c3b4cc91be92d062a245f376c4f67af088e4fe6f0297917e24bf214b064", "query": "The crater run for merged doctests () detected a large number of regressions due to tests examining the process arguments. (I don't have an exact number, but let's say ~50 projects.) Previously, there were no arguments, but now it gets arguments like which the user's CLI parsing code isn't expecting. For example, using something like will now fail. It seems to be fairly common to have objects whose default constructor will parse the arguments from the command line. I'm not sure if or how we should resolve that. One idea is to use a different mechanism for passing in the test information (like via an environment variable).\nI vaguely remember that using environment variables was discussed but can't find anything except mentioning it . Maybe it'd be a better approach indeed. Let me send a PR so we can make a crater run on it.", "positive_passages": [{"docid": "doc-en-rust-8b35b59110368adb05880f53682f667dce5c05bad77fb02e2f4e102240e92e0c", "text": "#[allow(unused)] pub fn doctest_runner(bin: &std::path::Path, test_nb: usize) -> Result<(), String> {{ let out = std::process::Command::new(bin) .arg(self::RUN_OPTION) .arg(test_nb.to_string()) .env(self::RUN_OPTION, test_nb.to_string()) .args(std::env::args().skip(1).collect:: let bin_marker = std::ffi::OsStr::new(__doctest_mod::BIN_OPTION); let test_marker = std::ffi::OsStr::new(__doctest_mod::RUN_OPTION); let test_args = &[{test_args}]; const ENV_BIN: &'static str = \"RUSTDOC_DOCTEST_BIN_PATH\"; let mut args = std::env::args_os().skip(1); while let Some(arg) = args.next() {{ if arg == bin_marker {{ let Some(binary) = args.next() else {{ panic!(\"missing argument after `{{}}`\", __doctest_mod::BIN_OPTION); }}; if crate::__doctest_mod::BINARY_PATH.set(binary.into()).is_err() {{ panic!(\"`{{}}` option was used more than once\", bin_marker.to_string_lossy()); }} return 
std::process::Termination::report(test::test_main(test_args, Vec::from(TESTS), None)); }} else if arg == test_marker {{ let Some(nb_test) = args.next() else {{ panic!(\"missing argument after `{{}}`\", __doctest_mod::RUN_OPTION); }}; if let Some(nb_test) = nb_test.to_str().and_then(|nb| nb.parse:: if let Ok(binary) = std::env::var(ENV_BIN) {{ let _ = crate::__doctest_mod::BINARY_PATH.set(binary.into()); unsafe {{ std::env::remove_var(ENV_BIN); }} return std::process::Termination::report(test::test_main(test_args, Vec::from(TESTS), None)); }} else if let Ok(nb_test) = std::env::var(__doctest_mod::RUN_OPTION) {{ if let Ok(nb_test) = nb_test.parse:: panic!(\"Unexpected value after `{{}}`\", __doctest_mod::RUN_OPTION); }} panic!(\"Unexpected value for `{{}}`\", __doctest_mod::RUN_OPTION); }} eprintln!(\"WARNING: No argument provided so doctests will be run in the same process\"); eprintln!(\"WARNING: No rustdoc doctest environment variable provided so doctests will be run in the same process\"); std::process::Termination::report(test::test_main(test_args, Vec::from(TESTS), None)) }}\", nb_tests = self.nb_tests,", "commid": "rust_pr_131095"}], "negative_passages": []}
{"query_id": "q-en-rust-654bffdfcd4ae0374e4069afb71f236b1477555682e2524b014ef184d6eaf9c2", "query": "In // If the output is piped to e.g. `head -n1` we want the process to be killed, // rather than having an error bubble up and cause a panic. cargo.rustflag(\"-Zon-broken-pipe=kill\"); // Use an untracked env var `FORCE_ON_BROKEN_PIPE_KILL` here instead of `RUSTFLAGS`. // `RUSTFLAGS` is tracked by cargo. Conditionally omitting `-Zon-broken-pipe=kill` from // `RUSTFLAGS` causes unnecessary tool rebuilds due to cache invalidation from building e.g. // cargo *without* `-Zon-broken-pipe=kill` but then rustdoc *with* `-Zon-broken-pipe=kill`. cargo.env(\"FORCE_ON_BROKEN_PIPE_KILL\", \"-Zon-broken-pipe=kill\"); } cargo", "commid": "rust_pr_131155"}], "negative_passages": []}
{"query_id": "q-en-rust-654bffdfcd4ae0374e4069afb71f236b1477555682e2524b014ef184d6eaf9c2", "query": "In use build_helper::git::warn_old_master_branch; use crate::Build; #[cfg(not(feature = \"bootstrap-self-test\"))] use crate::builder::Builder;", "commid": "rust_pr_131331"}], "negative_passages": []}
{"query_id": "q-en-rust-16c2411bfbebb5117e894176290cf661217a08a1a1574ac1ecbdebad0e504368", "query": "I am getting this on each invocation now, since very recently: This is in my main checkout, not in a worktree, so I am not quite sure what could even be so unusual about my setup that causes an error here. Cc\nA bisect points at That's I am surprised about the date of this PR, I think I would have noticed this earlier if this happened for a month... so probably some other factor is also involved. Cc\nIt seems to try to read which indeed is not a file that exists here. I also noticed this line in So maybe checking these files is just futile since git doesn't always use a file to track these refs?\nI think what happened is that has been run, and that indeed cleans up most of the files from . So the current approach in bootstrap of looking at these files is not reliable. IMO we should instead look at the commit date of the most recent commit in that branch. That also avoids having to directly mess with git's internal files.\nyeah, i'm gonna accept defeat on this one. is kinda slow on my device, but only if the relevant files aren't in the i/o cache, and building rust takes a good while anyways.\nis not the case anymore since . I am planning to revert and as they are not really required.\nSeems like it will still be the case if upstream was configured and too old. But that could be fixed in a much better way (e.g., we could ignore upstream commit and use merge commit in current branch) compare to and", "positive_passages": [{"docid": "doc-en-rust-7bec0cf1c2a0c0c784e1822de03ed9fdeef886e368f9afadd11fa2e2ae43668e", "text": "if let Some(ref s) = build.config.ccache { cmd_finder.must_have(s); } warn_old_master_branch(&build.config.git_config(), &build.config.src); }", "commid": "rust_pr_131331"}], "negative_passages": []}
{"query_id": "q-en-rust-16c2411bfbebb5117e894176290cf661217a08a1a1574ac1ecbdebad0e504368", "query": "I am getting this on each invocation now, since very recently: This is in my main checkout, not in a worktree, so I am not quite sure what could even be so unusual about my setup that causes an error here. Cc\nA bisect points at That's I am surprised about the date of this PR, I think I would have noticed this earlier if this happened for a month... so probably some other factor is also involved. Cc\nIt seems to try to read which indeed is not a file that exists here. I also noticed this line in So maybe checking these files is just futile since git doesn't always use a file to track these refs?\nI think what happened is that has been run, and that indeed cleans up most of the files from . So the current approach in bootstrap of looking at these files is not reliable. IMO we should instead look at the commit date of the most recent commit in that branch. That also avoids having to directly mess with git's internal files.\nyeah, i'm gonna accept defeat on this one. is kinda slow on my device, but only if the relevant files aren't in the i/o cache, and building rust takes a good while anyways.\nis not the case anymore since . I am planning to revert and as they are not really required.\nSeems like it will still be the case if upstream was configured and too old. But that could be fixed in a much better way (e.g., we could ignore upstream commit and use merge commit in current branch) compare to and", "positive_passages": [{"docid": "doc-en-rust-7a884f9b37a698604f9eee69d853d2595d6ce4e7fb0cc6ae553bcdd6de3b2b5e", "text": ".collect(); Ok(Some(files)) } /// Print a warning if the branch returned from `updated_master_branch` is old /// /// For certain configurations of git repository, this remote will not be /// updated when running `git pull`. /// /// This can result in formatting thousands of files instead of a dozen, /// so we should warn the user something is wrong. 
pub fn warn_old_master_branch(config: &GitConfig<'_>, git_dir: &Path) { if crate::ci::CiEnv::is_ci() { // this warning is useless in CI, // and CI probably won't have the right branches anyway. return; } // this will be overwritten by the actual name, if possible let mut updated_master = \"the upstream master branch\".to_string(); match warn_old_master_branch_(config, git_dir, &mut updated_master) { Ok(branch_is_old) => { if !branch_is_old { return; } // otherwise fall through and print the rest of the warning } Err(err) => { eprintln!(\"warning: unable to check if {updated_master} is old due to error: {err}\") } } eprintln!( \"warning: {updated_master} is used to determine if files have been modifiedn warning: if it is not updated, this may cause files to be needlessly reformatted\" ); } pub fn warn_old_master_branch_( config: &GitConfig<'_>, git_dir: &Path, updated_master: &mut String, ) -> Result", "commid": "rust_pr_131331"}], "negative_passages": []}
{"query_id": "q-en-rust-8c602d6d427338702ea29b613b9c1cb115f3120e9c6b612b73e96076292b0f25", "query": "It seems that the prefix is used more often than , so we should swap the function names before 0.1.\nWe're probably going the other direction -\nDone.", "positive_passages": [{"docid": "doc-en-rust-a5ba39b7bcf27c282d841d940fcb3e140f6de1687e3700749e0a3f5ed56524e0", "text": "}, Primitive::F32 => types::F32, Primitive::F64 => types::F64, Primitive::Pointer => pointer_ty(tcx), // FIXME(erikdesjardins): handle non-default addrspace ptr sizes Primitive::Pointer(_) => pointer_ty(tcx), } }", "commid": "rust_pr_107843"}], "negative_passages": []}
{"query_id": "q-en-rust-8c602d6d427338702ea29b613b9c1cb115f3120e9c6b612b73e96076292b0f25", "query": "It seems that the prefix is used more often than , so we should swap the function names before 0.1.\nWe're probably going the other direction -\nDone.", "positive_passages": [{"docid": "doc-en-rust-5aae347605bb4660140896e7509a3d620ae7490dc4ab5527751813c1d58b58a0", "text": "pub(crate) use cpuid::codegen_cpuid_call; pub(crate) use llvm::codegen_llvm_intrinsic_call; use rustc_middle::ty::layout::HasParamEnv; use rustc_middle::ty::print::with_no_trimmed_paths; use rustc_middle::ty::subst::SubstsRef; use rustc_span::symbol::{kw, sym, Symbol};", "commid": "rust_pr_107843"}], "negative_passages": []}
{"query_id": "q-en-rust-8c602d6d427338702ea29b613b9c1cb115f3120e9c6b612b73e96076292b0f25", "query": "It seems that the prefix is used more often than , so we should swap the function names before 0.1.\nWe're probably going the other direction -\nDone.", "positive_passages": [{"docid": "doc-en-rust-528d627c63f00b8fcaddc0425a6cc6641358718e683d5ec9eca2f3441a3ab4e8", "text": "return; } if intrinsic == sym::assert_zero_valid && !fx.tcx.permits_zero_init(layout) { if intrinsic == sym::assert_zero_valid && !fx.tcx.permits_zero_init(fx.param_env().and(layout)) { with_no_trimmed_paths!({ crate::base::codegen_panic( fx,", "commid": "rust_pr_107843"}], "negative_passages": []}
{"query_id": "q-en-rust-8c602d6d427338702ea29b613b9c1cb115f3120e9c6b612b73e96076292b0f25", "query": "It seems that the prefix is used more often than , so we should swap the function names before 0.1.\nWe're probably going the other direction -\nDone.", "positive_passages": [{"docid": "doc-en-rust-1bce2b20d247fab559d4607d9cf57493df300b13b0bc5b81ccd0b4208efe195e", "text": "} if intrinsic == sym::assert_mem_uninitialized_valid && !fx.tcx.permits_uninit_init(layout) && !fx.tcx.permits_uninit_init(fx.param_env().and(layout)) { with_no_trimmed_paths!({ crate::base::codegen_panic(", "commid": "rust_pr_107843"}], "negative_passages": []}
{"query_id": "q-en-rust-8c602d6d427338702ea29b613b9c1cb115f3120e9c6b612b73e96076292b0f25", "query": "It seems that the prefix is used more often than , so we should swap the function names before 0.1.\nWe're probably going the other direction -\nDone.", "positive_passages": [{"docid": "doc-en-rust-8659227658fbfb5aa409e123898d07bb1956bfea9c6d555bca9f36a6e51bcaa6", "text": "is_main_fn: bool, sigpipe: u8, ) { let main_ret_ty = tcx.fn_sig(rust_main_def_id).output(); let main_ret_ty = tcx.bound_fn_sig(rust_main_def_id).subst_identity().output(); // Given that `main()` has no arguments, // then its return type cannot have // late-bound regions, since late-bound", "commid": "rust_pr_107843"}], "negative_passages": []}
{"query_id": "q-en-rust-1d0a44fcb334d632818a76b77ed647afef0351f397f3ea10fa55c8952e10b234", "query": "Yes, really\nIt looks like this is 2640 bytes of lovecraft quotes in each exectuable (out of , 0.56%). Perhaps it's not so bad?\n2k is pretty bad, because a hello world binary should be ~10k at most. It's not the lowest hanging fruit right now but it will become a bigger fish as we fix the other problems. I think it's an indication of a larger issue rather than being a problem itself though...\nI think there is no defensible reason to have that text in every executable.\nAnd thus poetry was killed...\nNot particularly new - and\n2K? So that's what 2014 is about?\nIf you want Rust to be taken seriously as a systems language, you have to worry about that. It's not acceptable in many embedded environments for binaries to be this large. It makes the language unusable without writing the entire standard library again. It's fine for it to be in the standard libraries... just not every binary linking against them.\nShould be fine until other sources of bloat are removed.\nThe problem here isn't the quotes, it's that the code containing the quotes is used or considered used at all. It's the canary in the coal mine. I'm sure we can cut down the runtime overhead to something much smaller, including getting rid of this.\nIt's not unreasonable for it to be considered used, considering it's part of the abort code, so even a trivial program might trigger it (e.g. on OOM, stack corruption, etc). Personally I'd add a no_horrors cfg option or something to disable it in the rare cases where 2k is considered important, unless the potential for user confusion is considered to great to keep the behaviour at all.\nIt's not rare for people to compare the size of the binaries to C and decide against using Rust for their use case due to it being very unfavourable. You shouldn't have to compile Rust again to disable debugging features useful only to the developers. This is only the runtime abort code, it's not the generic which isn't even a function call. Runtime errors should report real errors, and logic errors should be dealt with by improving the code so redundant checks aren't necessary in the normal release builds. It can remain around in a build that's not for regular end users... the code paths calling this just need some work to remove the need for debugging stuff.\nYeah but I love it though :( Hehe.\nI also love it, but it's more important to be professional and chuck it out. Let's not make this Rust's equivalent to TPAAMAYIMNEKUDOTAYIM. Easter eggs are fun, but they have to have literally zero impact on users.\nThe key question isn't \"is 2K big or small\". The key question is \"what value does this add\".\nKeeping these as default isn't very professional for many reasons, but I think value would be lost if lightheartedness was removed from Rust as a continuing policy. I've been amazed at how many people become interested in the language after I link them the abort in question. A human touch to the compiler output goes a long way. How about a '-fun' flag that enables easter eggs being compiled in?\nSounds a bit ridiculous tbh. The point of an easter egg is to have it everywhere, not to compile it purposefully.\nPerhaps, but I'd argue the main point of an easter egg is to be entertainingly surprising. A flag enabling them wouldn't give away when they happen, just discourage unpleasant surprises. for a compiler. Another option would be to strip the messages when optimizations are enabled. -O equals -nofun.\nThat's funny, wordy and mawkish. \"what value does this add\"? Nothing. I'd vote +1 to remove them entirely.\nBesides Lovecraft, I see tons of text obviously related to assertion failures and references to my build directory. I assume there is a lot of debugging code left in the compiler and libraries that will eventually get turned off in a production compiler, yes?\nAm I the only one that's going to bring up copyright issues? I work for a company that thoroughly reviews code with lawyers that will ban any and all uses of software that are even questionable in terms of their license or copyright issues. You might claim embedding these quotes (and without attribution) is fair use etc, but all it takes is for one of my company's lawyers to disagree with your conclusion and suddenly Rust is banned from use in the company. I understand and appreciate the value of having some fun with a project, but these quotes: unnecessary overhead to executable size and memory (and yes, in real life I work on embedded systems where a few k can make a difference). potential users because a zealous lawyer has concerns regarding the quotes and bans Rust \"just to be safe\".\nLovecraft's works are basically public domain, it's not a problem.\nTo be fair, pretty sure copyright is the wrong case to make. Lovecraft's works are in the public domain. Edit: Dang, Steve beat me to it.\nWon't the embedded environments you speak of use libcore?\nTrue, but (but I probably should have been more specific because this issue specifically mention's Lovecraft's quotes and doesn't mention Zelda quotes). I know I'm being a bit paranoid here when I bring up copyright stuff, but I'm not kidding when I mention zealous lawyers laying down strict rules...\nThis is only the opinion of a user on the outside looking in, but I'm not sure there's any sensible reason for this being in-- at best it's a waste of space and code, and at worst having Lovecraft spouted during an could be seen by the user on the receiving end as being in poor taste. (Speaking from experience a month or two ago, when a build of ed after ages of tring to compile it)\nI think that it should remain in. It sort of reminds people who stumble upon this that has indeed been built by real people :-) I also don't think that it is somehow \"unprofessional\", every program I ever came across has some sort of an easter egg in it,including Git, (starting with its name). Having this compile only with a certain opt in flag kind of defeats the purpose, but maybe a flag that can explicitly opt out of this kind of thing is worth having for embedded developers. There's no sensible reason apart from having some fun, I think that's enough of a reason, but opinions will differ.\nIt's not part of , it's part of the runtime and is included in every Rust program without . An easter egg like this in is different than putting it in every Rust program.\nI suggest its removal simply be the christening commit of 1.0-stable.\nyesssss\nTo those of you saying \"2k, in 2014?\" I will point out that there are a lot of platforms out there where 2k is still a big deal. If Rust is going to work for embedded systems work, you can't pull tricks like this.\nthose would not be using Rust's runtime anyway, and so would not have this in it.\nWhy should we have poetry in a binary executable generated by a supposedly serious, non-esoteric programming language? One would think this wouldn't even need to be a discussion...........\nthat is not necessarily true. As I understand, the runtime is getting slimmer and slimmer with every RFC.\naye. To my tastes, this sort of easter egg belongs in the compiler/toolchain, not a statically linked stdlib.\nTrue, it probably shouldn't be part of the runtime.\nAn embedded gag that only comes up when the program is frustrating the user with a catastrophic error?\nHonestly, as someone outside the Rust community, seeing this puts me off using the language. I'm a Lovecraft fan, and if this was an easter egg in the toolchain I would love it, but embedding this statically into all the binaries I ship to end users is a step too far. There's a lot of negative interpretations a snooping end-user could get from finding these quotes talking about things dying and gods in the binaries (and running on a binary isn't exactly difficult). That negative interpretation would fall on me, who they see as the developer, not the Rust team, who they probably don't even know exists. Rust is a tool. If my table saw has a tiny inscription on the underside with the name of the engineer who designed it - that's neat, a fun homage, and overall great. If my table saw contains a tiny router that engraves that engineer's name onto every piece of wood I cut with it - not so great. It degrades the quality of the tool. While I appreciate the entertainment value of the quotes, I think moving them into a developer tool like the Rust compiler would maintain that entertainment while also not degrading the quality of Rust by inserting unwanted data in generated binaries.\nThat's an incredibly fitting analogy. Here are a complete outsider's 2 cents: why not change instead to display a funny (or lighthearted) quote when something goes wrong, to sort of alleviate the pain of having your compilation fail? Of course, it shouldn't get in the way of you solving the issue (or it would be even worse), but if it manages to have some \"decorative\" value, I think it would be a perfect middle-ground between getting rid of this easter egg entirely or moving it to a more sensible place.\nI am very much in favour of removing the quotes. Put the quotes somewhere where humans will read them, not machines", "positive_passages": [{"docid": "doc-en-rust-8a8d324a17b540a90c83e611e7ba00643e6b0d5c28b061e8598443ce13d6eba1", "text": "let _ = write!(&mut w, \"{}\", args); let msg = str::from_utf8(&w.buf[0..w.pos]).unwrap_or(\"aborted\"); let msg = if msg.is_empty() {\"aborted\"} else {msg}; // Give some context to the message let hash = msg.bytes().fold(0, |accum, val| accum + (val as uint) ); let quote = match hash % 10 { 0 => \" It was from the artists and poets that the pertinent answers came, and I know that panic would have broken loose had they been able to compare notes. As it was, lacking their original letters, I half suspected the compiler of having asked leading questions, or of having edited the correspondence in corroboration of what he had latently resolved to see.\", 1 => \" There are not many persons who know what wonders are opened to them in the stories and visions of their youth; for when as children we listen and dream, we think but half-formed thoughts, and when as men we try to remember, we are dulled and prosaic with the poison of life. But some of us awake in the night with strange phantasms of enchanted hills and gardens, of fountains that sing in the sun, of golden cliffs overhanging murmuring seas, of plains that stretch down to sleeping cities of bronze and stone, and of shadowy companies of heroes that ride caparisoned white horses along the edges of thick forests; and then we know that we have looked back through the ivory gates into that world of wonder which was ours before we were wise and unhappy.\", 2 => \" Instead of the poems I had hoped for, there came only a shuddering blackness and ineffable loneliness; and I saw at last a fearful truth which no one had ever dared to breathe before \u2014 the unwhisperable secret of secrets \u2014 The fact that this city of stone and stridor is not a sentient perpetuation of Old New York as London is of Old London and Paris of Old Paris, but that it is in fact quite dead, its sprawling body imperfectly embalmed and infested with queer animate things which have nothing to do with it as it was in life.\", 3 => \" The ocean ate the last of the land and poured into the smoking gulf, thereby giving up all it had ever conquered. From the new-flooded lands it flowed again, uncovering death and decay; and from its ancient and immemorial bed it trickled loathsomely, uncovering nighted secrets of the years when Time was young and the gods unborn. Above the waves rose weedy remembered spires. The moon laid pale lilies of light on dead London, and Paris stood up from its damp grave to be sanctified with star-dust. Then rose spires and monoliths that were weedy but not remembered; terrible spires and monoliths of lands that men never knew were lands...\", 4 => \" There was a night when winds from unknown spaces whirled us irresistibly into limitless vacuum beyond all thought and entity. Perceptions of the most maddeningly untransmissible sort thronged upon us; perceptions of infinity which at the time convulsed us with joy, yet which are now partly lost to my memory and partly incapable of presentation to others.\", _ => \"You've met with a terrible fate, haven't you?\" }; rterrln!(\"{}\", \"\"); rterrln!(\"{}\", quote); rterrln!(\"{}\", \"\"); rterrln!(\"fatal runtime error: {}\", msg); unsafe { intrinsics::abort(); } }", "commid": "rust_pr_20944"}], "negative_passages": []}
{"query_id": "q-en-rust-75f48a827d240fd7005de6013e817df4841e8ab17224bcb8f5ab5f05ec71ea5e", "query": "See I propose to change the license of Non trivial contributors: OK OK OK (MoCo employee) Contributors appearing in git blame that are trivial (trivial update due to language changes): OK (MoCo employee) OK OK (MoCo employee) OK OK OK OK OK OK (sent agreement to by email) Contributors with removed and trivial contributions are not considered. For the persons that are not in \"OK\" state, please respond to this issue saying: Then, I'll propose the corresponding PR. Thanks.\nI agree to relicense any previous contributions to according to the term of the computer Language Benchmarks Game license (\nWe don't need statements from Graydon or Marijn as they both contributed as employees. On Jun 7, 2014 5:50 AM, \"TeXitoi\" wrote:\nI agree to relicense any previous contributions to according to the term of the Computer Language Benchmarks Game license ()\nOp updated. could you please respond?\nCiting in a private email: can I consider the contribution of negligible? the same for renaming? If yes I can propose a PR for this and for\nguh, I suppose so, though I'm disappointed that didn't cooperate.\nMy change is obviously trivial and I have zero copyright over anything to do with the shootout benchmarks, but if you must, then: I agree to relicense any previous contributions to according to the term of the Computer Language Benchmarks Game license ()\nOP updated. Thanks. I'll wait some days to see if respond to my email.\nI agree to relicense any previous contributions to according to the term of the Computer Language Benchmarks Game license ()", "positive_passages": [{"docid": "doc-en-rust-8edeae1330276a8770077e003ad1cbba64a554bbc70d490031becfe19c09fbc5", "text": "\"libstd/sync/spsc_queue.rs\", # BSD \"libstd/sync/mpmc_bounded_queue.rs\", # BSD \"libsync/mpsc_intrusive.rs\", # BSD \"test/bench/shootout-binarytrees.rs\", # BSD \"test/bench/shootout-fannkuch-redux.rs\", # BSD \"test/bench/shootout-meteor.rs\", # BSD \"test/bench/shootout-regex-dna.rs\", # BSD", "commid": "rust_pr_14855"}], "negative_passages": []}
{"query_id": "q-en-rust-75f48a827d240fd7005de6013e817df4841e8ab17224bcb8f5ab5f05ec71ea5e", "query": "See I propose to change the license of Non trivial contributors: OK OK OK (MoCo employee) Contributors appearing in git blame that are trivial (trivial update due to language changes): OK (MoCo employee) OK OK (MoCo employee) OK OK OK OK OK OK (sent agreement to by email) Contributors with removed and trivial contributions are not considered. For the persons that are not in \"OK\" state, please respond to this issue saying: Then, I'll propose the corresponding PR. Thanks.\nI agree to relicense any previous contributions to according to the term of the computer Language Benchmarks Game license (\nWe don't need statements from Graydon or Marijn as they both contributed as employees. On Jun 7, 2014 5:50 AM, \"TeXitoi\" wrote:\nI agree to relicense any previous contributions to according to the term of the Computer Language Benchmarks Game license ()\nOp updated. could you please respond?\nCiting in a private email: can I consider the contribution of negligible? the same for renaming? If yes I can propose a PR for this and for\nguh, I suppose so, though I'm disappointed that didn't cooperate.\nMy change is obviously trivial and I have zero copyright over anything to do with the shootout benchmarks, but if you must, then: I agree to relicense any previous contributions to according to the term of the Computer Language Benchmarks Game license ()\nOP updated. Thanks. I'll wait some days to see if respond to my email.\nI agree to relicense any previous contributions to according to the term of the Computer Language Benchmarks Game license ()", "positive_passages": [{"docid": "doc-en-rust-ee0db9f6886de7a6b863a18f98de10abd6e39a9f794c5d6478793c9ca7fa21a7", "text": " // Copyright 2012-2013 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // The Computer Language Benchmarks Game // http://benchmarksgame.alioth.debian.org/ // // Licensed under the Apache License, Version 2.0 // contributed by the Rust Project Developers // Copyright (c) 2012-2014 The Rust Project Developers // // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions // are met: // // - Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // // - Redistributions in binary form must reproduce the above copyright // notice, this list of conditions and the following disclaimer in // the documentation and/or other materials provided with the // distribution. // // - Neither the name of \"The Computer Language Benchmarks Game\" nor // the name of \"The Computer Language Shootout Benchmarks\" nor the // names of its contributors may be used to endorse or promote // products derived from this software without specific prior // written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS // FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE // COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, // INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES // (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR // SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) // HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, // STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) // ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED // OF THE POSSIBILITY OF SUCH DAMAGE. extern crate sync; extern crate arena;", "commid": "rust_pr_14855"}], "negative_passages": []}
{"query_id": "q-en-rust-6f04471aa769edcf2ac894521aed5ab2886f2f87af17ee8420798ac5cf721630", "query": "Currently, pointers stored on the stack aren't word-aligned. This is a critical issue because it causes stack growth to fail most of the time.\nCommit disables task growth for now until this is fixed.\nWONTFIX (moot point given disabled stack growth)", "positive_passages": [{"docid": "doc-en-rust-f30a4a313f877b3bbcc8c6bf08f0e468ccaa46514db69c115a3d06715268e1fc", "text": "rustup toolchain install --profile minimal nightly-${TOOLCHAIN} # Sanity check to see if the nightly exists echo nightly-${TOOLCHAIN} > rust-toolchain echo \"=> Uninstalling all old nighlies\" echo \"=> Uninstalling all old nightlies\" for nightly in $(rustup toolchain list | grep nightly | grep -v $TOOLCHAIN | grep -v nightly-x86_64); do rustup toolchain uninstall $nightly done", "commid": "rust_pr_97887"}], "negative_passages": []}
{"query_id": "q-en-rust-e7a9e1f777d5a32db801c9b16916b12385516018cdf0d625ce8b9cd38a3ad308", "query": "(sorry about the bad title but I really have no clue what's going on here!) Compiles and prints . I think it's getting at the of the inside a . edit: As one might expect, does not compile.\nSeems rather bad. cc\nNominating.\nThis is a fun bug, it's by far not limited to the operator. I tried to reduce the test case a bit, starting with a new non-operator trait, changed it to a static method that just returns for inference, and...\ncc me\nAssigning 1.0, P-backcompat-lang.\nEven smaller:", "positive_passages": [{"docid": "doc-en-rust-0de57ad815da55c60f29375af01abc3adcabb776ac8c583d7f2953eaa398b8f2", "text": "} } (&ty::ty_param(ref a_p), &ty::ty_param(ref b_p)) if a_p.idx == b_p.idx => { (&ty::ty_param(ref a_p), &ty::ty_param(ref b_p)) if a_p.idx == b_p.idx && a_p.space == b_p.space => { Ok(a) }", "commid": "rust_pr_15356"}], "negative_passages": []}
{"query_id": "q-en-rust-e7a9e1f777d5a32db801c9b16916b12385516018cdf0d625ce8b9cd38a3ad308", "query": "(sorry about the bad title but I really have no clue what's going on here!) Compiles and prints . I think it's getting at the of the inside a . edit: As one might expect, does not compile.\nSeems rather bad. cc\nNominating.\nThis is a fun bug, it's by far not limited to the operator. I tried to reduce the test case a bit, starting with a new non-operator trait, changed it to a static method that just returns for inference, and...\ncc me\nAssigning 1.0, P-backcompat-lang.\nEven smaller:", "positive_passages": [{"docid": "doc-en-rust-287f506bfb853e1eed85a3bfdd7ab8ae4fee680699f45b24adbf865c20957ec7", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 \"${CFG_PREFIX}/bin/rustc\" --version > /dev/null \"${CFG_PREFIX}/bin/rustc\" --version 2> /dev/null 1> /dev/null if [ $? -ne 0 ] then ERR=\"can't execute installed rustc binary. \" ERR=\"${ERR}installation may be broken. \" ERR=\"${ERR}if this is expected then rerun install.sh with `--disable-verify` \" ERR=\"${ERR}or `make install` with `--disable-verify-install`\" err \"${ERR}\" export $CFG_LD_PATH_VAR=\"${CFG_PREFIX}/lib\":$CFG_OLD_LD_PATH_VAR \"${CFG_PREFIX}/bin/rustc\" --version > /dev/null if [ $? -ne 0 ] then ERR=\"can't execute installed rustc binary. \" ERR=\"${ERR}installation may be broken. \" ERR=\"${ERR}if this is expected then rerun install.sh with `--disable-verify` \" ERR=\"${ERR}or `make install` with `--disable-verify-install`\" err \"${ERR}\" else echo echo \" please ensure '${CFG_PREFIX}/lib' is added to ${CFG_LD_PATH_VAR}\" echo fi fi fi", "commid": "rust_pr_15550"}], "negative_passages": []}
{"query_id": "q-en-rust-e7d5c4cff4e3956d56aa6d2f4d95d22c14f61bd4f17ed05e5d4264aafe72e806", "query": "At this point, and both exist. Just in case it matters: my prefix is .\nFallout from , perhaps? This isn\u2019t the first time we\u2019ve had completely broken (two months ago it happened, ). Why don\u2019t we add it to the tests done on the buildbot?\nIndeed, this was predicted as a known consequence of . It may be the case that asking people to set is too much. We might need/want wrapper scripts to drive that set up such environment variables, much like other projects have.\nFixed by .\n(see comment , since it is relevant to this issue.)", "positive_passages": [{"docid": "doc-en-rust-c23d4e10ab6c0b51097190df93aa74ad9bfe719c59eb2a5ff34a3500fe1f1f28", "text": "TTNonterminal(Span, Ident) } /// Matchers are nodes defined-by and recognized-by the main rust parser and /// language, but they're only ever found inside syntax-extension invocations; /// indeed, the only thing that ever _activates_ the rules in the rust parser /// for parsing a matcher is a matcher looking for the 'matchers' nonterminal /// itself. Matchers represent a small sub-language for pattern-matching /// token-trees, and are thus primarily used by the macro-defining extension /// itself. /// /// MatchTok /// -------- /// /// A matcher that matches a single token, denoted by the token itself. So /// long as there's no $ involved. /// /// /// MatchSeq /// -------- /// /// A matcher that matches a sequence of sub-matchers, denoted various /// possible ways: /// /// $(M)* zero or more Ms /// $(M)+ one or more Ms /// $(M),+ one or more comma-separated Ms /// $(A B C);* zero or more semi-separated 'A B C' seqs /// /// /// MatchNonterminal /// ----------------- /// /// A matcher that matches one of a few interesting named rust /// nonterminals, such as types, expressions, items, or raw token-trees. A /// black-box matcher on expr, for example, binds an expr to a given ident, /// and that ident can re-occur as an interpolation in the RHS of a /// macro-by-example rule. For example: /// /// $foo:expr => 1 + $foo // interpolate an expr /// $foo:tt => $foo // interpolate a token-tree /// $foo:tt => bar! $foo // only other valid interpolation /// // is in arg position for another /// // macro /// /// As a final, horrifying aside, note that macro-by-example's input is /// also matched by one of these matchers. Holy self-referential! It is matched /// by a MatchSeq, specifically this one: /// /// $( $lhs:matchers => $rhs:tt );+ /// /// If you understand that, you have closed the loop and understand the whole /// macro system. Congratulations. // Matchers are nodes defined-by and recognized-by the main rust parser and // language, but they're only ever found inside syntax-extension invocations; // indeed, the only thing that ever _activates_ the rules in the rust parser // for parsing a matcher is a matcher looking for the 'matchers' nonterminal // itself. Matchers represent a small sub-language for pattern-matching // token-trees, and are thus primarily used by the macro-defining extension // itself. // // MatchTok // -------- // // A matcher that matches a single token, denoted by the token itself. So // long as there's no $ involved. // // // MatchSeq // -------- // // A matcher that matches a sequence of sub-matchers, denoted various // possible ways: // // $(M)* zero or more Ms // $(M)+ one or more Ms // $(M),+ one or more comma-separated Ms // $(A B C);* zero or more semi-separated 'A B C' seqs // // // MatchNonterminal // ----------------- // // A matcher that matches one of a few interesting named rust // nonterminals, such as types, expressions, items, or raw token-trees. A // black-box matcher on expr, for example, binds an expr to a given ident, // and that ident can re-occur as an interpolation in the RHS of a // macro-by-example rule. For example: // // $foo:expr => 1 + $foo // interpolate an expr // $foo:tt => $foo // interpolate a token-tree // $foo:tt => bar! $foo // only other valid interpolation // // is in arg position for another // // macro // // As a final, horrifying aside, note that macro-by-example's input is // also matched by one of these matchers. Holy self-referential! It is matched // by a MatchSeq, specifically this one: // // $( $lhs:matchers => $rhs:tt );+ // // If you understand that, you have closed the loop and understand the whole // macro system. Congratulations. pub type Matcher = Spanned
{"query_id": "q-en-rust-2bef1c848f9e8900962cfff266d5341880ab73627fd8865536de623b9538e9a8", "query": "fails with vector v.\nWONTFIX (not required for bootstrapping, reopen if somehow this re-emerges in rustc)", "positive_passages": [{"docid": "doc-en-rust-f30a4a313f877b3bbcc8c6bf08f0e468ccaa46514db69c115a3d06715268e1fc", "text": "rustup toolchain install --profile minimal nightly-${TOOLCHAIN} # Sanity check to see if the nightly exists echo nightly-${TOOLCHAIN} > rust-toolchain echo \"=> Uninstalling all old nighlies\" echo \"=> Uninstalling all old nightlies\" for nightly in $(rustup toolchain list | grep nightly | grep -v $TOOLCHAIN | grep -v nightly-x86_64); do rustup toolchain uninstall $nightly done", "commid": "rust_pr_97887"}], "negative_passages": []}
{"query_id": "q-en-rust-72d4c72ff7db52a00a503bf5c52b2971949231687f668c77eb806ffbe5f15c17", "query": "On the pull request for (), had a good suggestion for what I think would make a good additional function: \"a version of frombytes which doesn't fail on invalid input but rather offers some sort of error recovery options. Like maybe it can return the partly converted string with an index to the invalid position. Or perhaps, a callback function to handle the error, etc.\" At a glance the easy way to do this would be with a callback to, e.g., drop, replace, or coerce offending sequences into UTF-8 or else return a ... But is this a recipe for complex callbacks? Perhaps providing sensible callback functions like , , , and so on would make this simple enough to use. Maybe those should just be simple functions for whole byte vector conversion, anyways. What do you think?\nI don't have a strong opinion about it, but I guess I would prefer to wait until somebody actually needs something like this.\nNot RFC-level proposal. Just a library wishlist item. Might accept a patch; but it seems a bit overwrought. Anyone need such a thing yet?\nI think this is subsumed in ; we might well adopt a dynamic-handler mechanism while working through that bug. This would be a special case.", "positive_passages": [{"docid": "doc-en-rust-d36d32fd3d4345a3bf226c229abe0dae9016f24f81ce217e95a40d851aa2b4d6", "text": " Subproject commit 2751bdcef125468ea2ee006c11992cd1405aebe5 Subproject commit 34fca48ed284525b2f124bf93c51af36d6685492 ", "commid": "rust_pr_115761"}], "negative_passages": []}
{"query_id": "q-en-rust-72d4c72ff7db52a00a503bf5c52b2971949231687f668c77eb806ffbe5f15c17", "query": "On the pull request for (), had a good suggestion for what I think would make a good additional function: \"a version of frombytes which doesn't fail on invalid input but rather offers some sort of error recovery options. Like maybe it can return the partly converted string with an index to the invalid position. Or perhaps, a callback function to handle the error, etc.\" At a glance the easy way to do this would be with a callback to, e.g., drop, replace, or coerce offending sequences into UTF-8 or else return a ... But is this a recipe for complex callbacks? Perhaps providing sensible callback functions like , , , and so on would make this simple enough to use. Maybe those should just be simple functions for whole byte vector conversion, anyways. What do you think?\nI don't have a strong opinion about it, but I guess I would prefer to wait until somebody actually needs something like this.\nNot RFC-level proposal. Just a library wishlist item. Might accept a patch; but it seems a bit overwrought. Anyone need such a thing yet?\nI think this is subsumed in ; we might well adopt a dynamic-handler mechanism while working through that bug. This would be a special case.", "positive_passages": [{"docid": "doc-en-rust-4486230c5649bfb99f8b8b2a73149a6e84bc6ed0ad65fcb881fca7e59c882606", "text": " Subproject commit 388750b081c0893c275044d37203f97709e058ba Subproject commit e3f3af69dce71cd37a785bccb7e58449197d940c ", "commid": "rust_pr_115761"}], "negative_passages": []}
{"query_id": "q-en-rust-72d4c72ff7db52a00a503bf5c52b2971949231687f668c77eb806ffbe5f15c17", "query": "On the pull request for (), had a good suggestion for what I think would make a good additional function: \"a version of frombytes which doesn't fail on invalid input but rather offers some sort of error recovery options. Like maybe it can return the partly converted string with an index to the invalid position. Or perhaps, a callback function to handle the error, etc.\" At a glance the easy way to do this would be with a callback to, e.g., drop, replace, or coerce offending sequences into UTF-8 or else return a ... But is this a recipe for complex callbacks? Perhaps providing sensible callback functions like , , , and so on would make this simple enough to use. Maybe those should just be simple functions for whole byte vector conversion, anyways. What do you think?\nI don't have a strong opinion about it, but I guess I would prefer to wait until somebody actually needs something like this.\nNot RFC-level proposal. Just a library wishlist item. Might accept a patch; but it seems a bit overwrought. Anyone need such a thing yet?\nI think this is subsumed in ; we might well adopt a dynamic-handler mechanism while working through that bug. This would be a special case.", "positive_passages": [{"docid": "doc-en-rust-fc90d79d15ff6d2cf104695d5e4e15cfbf996a2a921b483b76249444bbd3e0ce", "text": " Subproject commit d43038932adeb16ada80e206d4c073d851298101 Subproject commit ee7c676fd6e287459cb407337652412c990686c0 ", "commid": "rust_pr_115761"}], "negative_passages": []}
{"query_id": "q-en-rust-72d4c72ff7db52a00a503bf5c52b2971949231687f668c77eb806ffbe5f15c17", "query": "On the pull request for (), had a good suggestion for what I think would make a good additional function: \"a version of frombytes which doesn't fail on invalid input but rather offers some sort of error recovery options. Like maybe it can return the partly converted string with an index to the invalid position. Or perhaps, a callback function to handle the error, etc.\" At a glance the easy way to do this would be with a callback to, e.g., drop, replace, or coerce offending sequences into UTF-8 or else return a ... But is this a recipe for complex callbacks? Perhaps providing sensible callback functions like , , , and so on would make this simple enough to use. Maybe those should just be simple functions for whole byte vector conversion, anyways. What do you think?\nI don't have a strong opinion about it, but I guess I would prefer to wait until somebody actually needs something like this.\nNot RFC-level proposal. Just a library wishlist item. Might accept a patch; but it seems a bit overwrought. Anyone need such a thing yet?\nI think this is subsumed in ; we might well adopt a dynamic-handler mechanism while working through that bug. This would be a special case.", "positive_passages": [{"docid": "doc-en-rust-b2a11c6984f4036649711b8c163467eea27f9c2d2d043354ba5e884f779a2a20", "text": " Subproject commit 07e0df2f006e59d171c6bf3cafa9d61dbeb520d8 Subproject commit c954202c1e1720cba5628f99543cc01188c7d6fc ", "commid": "rust_pr_115761"}], "negative_passages": []}
{"query_id": "q-en-rust-72d4c72ff7db52a00a503bf5c52b2971949231687f668c77eb806ffbe5f15c17", "query": "On the pull request for (), had a good suggestion for what I think would make a good additional function: \"a version of frombytes which doesn't fail on invalid input but rather offers some sort of error recovery options. Like maybe it can return the partly converted string with an index to the invalid position. Or perhaps, a callback function to handle the error, etc.\" At a glance the easy way to do this would be with a callback to, e.g., drop, replace, or coerce offending sequences into UTF-8 or else return a ... But is this a recipe for complex callbacks? Perhaps providing sensible callback functions like , , , and so on would make this simple enough to use. Maybe those should just be simple functions for whole byte vector conversion, anyways. What do you think?\nI don't have a strong opinion about it, but I guess I would prefer to wait until somebody actually needs something like this.\nNot RFC-level proposal. Just a library wishlist item. Might accept a patch; but it seems a bit overwrought. Anyone need such a thing yet?\nI think this is subsumed in ; we might well adopt a dynamic-handler mechanism while working through that bug. This would be a special case.", "positive_passages": [{"docid": "doc-en-rust-a5f5f68f24cab9f177c848c1dd6b9756981899e73e2c1635bc805746cfffb84b", "text": " Subproject commit b123ab4754127d822ffb38349ce0fbf561f1b2fd Subproject commit 08bb147d51e815b96e8db7ba4cf870f201c11ff8 ", "commid": "rust_pr_115761"}], "negative_passages": []}
{"query_id": "q-en-rust-43ccaabceba2b19f3a93776a48d06f56c6a694c2a2eaa20060ed54d6293da90e", "query": "The std docs should list: The version they were built with The build date They could be at the top or bottom of every page, or on a stand-alone page like About or Help. This may not be important on rust- where I think they are rebuilt with every push but it seems like a good idea. It might be more useful on other sites where it isn't rebuilt for every push.\nI don't think this is a problem with std itself, since the version is in the URL (or it's master, which was built in the past 24 hours).\nNot sure where you std docs URL doesn't list a version. I don't think it will be a problem for Rust long-term. Once stable, the address will probably be something like with version varying between versions.\nPartly a dupe of\nYes, this is basically a dup of /", "positive_passages": [{"docid": "doc-en-rust-19132abfb27af4f49ca30d8e2d0e18e3a621078a8576851670f89aa7943f27a1", "text": "**Source:** https://github.com/rust-lang/rust-analyzer/blob/master/crates/rust-analyzer/src/config.rs[config.rs] The <<_installation,Installation>> section contains details on configuration for some of the editors. The < use driver::session::Session; use middle::def::*; use middle::resolve; use middle::ty; use middle::typeck; use util::ppaux; use syntax::ast::*; use syntax::{ast_util, ast_map}; use syntax::ast_util; use syntax::visit::Visitor; use syntax::visit;", "commid": "rust_pr_17264"}], "negative_passages": []}
{"query_id": "q-en-rust-56e33a072d9aec3b63af4a16f4a25f8a93a155cfe3938f1a7297febf189c70df", "query": "Normally when a static variable has a recursive the definition, the compiler issues an error. However, if the static is used (and not just defined) rustc overflows its stack. This program causes a stack overflow: Stack trace:\nConstant checking -- which detects this recursion -- occurs after type checking in the phase order, but type checking needs to evaluate the constant expression in order to find the length of the array type. It looks like strategically inserting a call to in the appropriate place would fix this at the cost of potentially performing the check many times.\nAfter some chatting on IRC, it looks like the best way to fix this is to break the item recursion check out into a separate pass that comes before type checking. This is my first time looking at this part of the compiler, but I'll give it a shot.\nThis approach looks promising. I need to do a full build/test run and then I can submit a PR.\nFixed.", "positive_passages": [{"docid": "doc-en-rust-c58d041ecdb9c4a91b6fac4dd2886bbb4f36dda7349357790bc41f12973abacb", "text": "match it.node { ItemStatic(_, _, ref ex) => { v.inside_const(|v| v.visit_expr(&**ex)); check_item_recursion(&v.tcx.sess, &v.tcx.map, &v.tcx.def_map, it); } ItemEnum(ref enum_definition, _) => { for var in (*enum_definition).variants.iter() {", "commid": "rust_pr_17264"}], "negative_passages": []}
{"query_id": "q-en-rust-56e33a072d9aec3b63af4a16f4a25f8a93a155cfe3938f1a7297febf189c70df", "query": "Normally when a static variable has a recursive the definition, the compiler issues an error. However, if the static is used (and not just defined) rustc overflows its stack. This program causes a stack overflow: Stack trace:\nConstant checking -- which detects this recursion -- occurs after type checking in the phase order, but type checking needs to evaluate the constant expression in order to find the length of the array type. It looks like strategically inserting a call to in the appropriate place would fix this at the cost of potentially performing the check many times.\nAfter some chatting on IRC, it looks like the best way to fix this is to break the item recursion check out into a separate pass that comes before type checking. This is my first time looking at this part of the compiler, but I'll give it a shot.\nThis approach looks promising. I need to do a full build/test run and then I can submit a PR.\nFixed.", "positive_passages": [{"docid": "doc-en-rust-849c04e400f26dedabc4a8dff4e33f6d7b742b5ef7bdeac33c33e4a76c5d0447", "text": "} visit::walk_expr(v, e); } struct CheckItemRecursionVisitor<'a, 'ast: 'a> { root_it: &'a Item, sess: &'a Session, ast_map: &'a ast_map::Map<'ast>, def_map: &'a resolve::DefMap, idstack: Vec", "commid": "rust_pr_17264"}], "negative_passages": []}
{"query_id": "q-en-rust-56e33a072d9aec3b63af4a16f4a25f8a93a155cfe3938f1a7297febf189c70df", "query": "Normally when a static variable has a recursive the definition, the compiler issues an error. However, if the static is used (and not just defined) rustc overflows its stack. This program causes a stack overflow: Stack trace:\nConstant checking -- which detects this recursion -- occurs after type checking in the phase order, but type checking needs to evaluate the constant expression in order to find the length of the array type. It looks like strategically inserting a call to in the appropriate place would fix this at the cost of potentially performing the check many times.\nAfter some chatting on IRC, it looks like the best way to fix this is to break the item recursion check out into a separate pass that comes before type checking. This is my first time looking at this part of the compiler, but I'll give it a shot.\nThis approach looks promising. I need to do a full build/test run and then I can submit a PR.\nFixed.", "positive_passages": [{"docid": "doc-en-rust-9775302ddf9ce19807147945c99da99c41a4b5621f377d8220fc29ceda073f38", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 euv::RefBinding(..) => { euv::RefBinding(..) | euv::MatchDiscriminant(..) => { format!(\"previous borrow of `{}` occurs here\", self.bccx.loan_path_to_string(&*old_loan.loan_path)) }", "commid": "rust_pr_17413"}], "negative_passages": []}
{"query_id": "q-en-rust-1a12006d024279fa8475b1aa755ca045b88e351de146fd634f30bc96d4b7f187", "query": "Sorry for not providing smaller case but I hope this is better than nothing: Both versions works (compile and start), but runtime behavior changes. The code that follows the above part: will match no matter what really is, if the value is partially moved (so mouse events are updating my game). If I use the version with , the code works as intended and game is updated only on keyboard presses.\nMinimal: This shouldn't compile at all.\nNominating.\nLooks like a regression. 0.10 rejects this code.", "positive_passages": [{"docid": "doc-en-rust-8fb78beaf826eae51154114cba362d288503e49fa4e78f695f6c5266d31fbb7b", "text": "euv::AddrOf | euv::RefBinding | euv::AutoRef | euv::ForLoop => { euv::ForLoop | euv::MatchDiscriminant => { format!(\"cannot borrow {} as mutable\", descr) } euv::ClosureInvocation => {", "commid": "rust_pr_17413"}], "negative_passages": []}
{"query_id": "q-en-rust-1a12006d024279fa8475b1aa755ca045b88e351de146fd634f30bc96d4b7f187", "query": "Sorry for not providing smaller case but I hope this is better than nothing: Both versions works (compile and start), but runtime behavior changes. The code that follows the above part: will match no matter what really is, if the value is partially moved (so mouse events are updating my game). If I use the version with , the code works as intended and game is updated only on keyboard presses.\nMinimal: This shouldn't compile at all.\nNominating.\nLooks like a regression. 0.10 rejects this code.", "positive_passages": [{"docid": "doc-en-rust-016685476a62622310251de040610fe56e03ebee04acd84b80532bca1cbca340", "text": "BorrowViolation(euv::OverloadedOperator) | BorrowViolation(euv::AddrOf) | BorrowViolation(euv::AutoRef) | BorrowViolation(euv::RefBinding) => { BorrowViolation(euv::RefBinding) | BorrowViolation(euv::MatchDiscriminant) => { \"cannot borrow data mutably\" }", "commid": "rust_pr_17413"}], "negative_passages": []}
{"query_id": "q-en-rust-1a12006d024279fa8475b1aa755ca045b88e351de146fd634f30bc96d4b7f187", "query": "Sorry for not providing smaller case but I hope this is better than nothing: Both versions works (compile and start), but runtime behavior changes. The code that follows the above part: will match no matter what really is, if the value is partially moved (so mouse events are updating my game). If I use the version with , the code works as intended and game is updated only on keyboard presses.\nMinimal: This shouldn't compile at all.\nNominating.\nLooks like a regression. 0.10 rejects this code.", "positive_passages": [{"docid": "doc-en-rust-e5f0f286a6044cd03fb2a2515ecf70804946a3cf79394990387079810ffd1680", "text": "OverloadedOperator, ClosureInvocation, ForLoop, MatchDiscriminant } #[deriving(PartialEq,Show)]", "commid": "rust_pr_17413"}], "negative_passages": []}
{"query_id": "q-en-rust-1a12006d024279fa8475b1aa755ca045b88e351de146fd634f30bc96d4b7f187", "query": "Sorry for not providing smaller case but I hope this is better than nothing: Both versions works (compile and start), but runtime behavior changes. The code that follows the above part: will match no matter what really is, if the value is partially moved (so mouse events are updating my game). If I use the version with , the code works as intended and game is updated only on keyboard presses.\nMinimal: This shouldn't compile at all.\nNominating.\nLooks like a regression. 0.10 rejects this code.", "positive_passages": [{"docid": "doc-en-rust-fc09eab11a1b6be20355e489ae2f307b01b9ed4d8003aac0361cbf38e0025f42", "text": "} ast::ExprMatch(ref discr, ref arms) => { // treatment of the discriminant is handled while // walking the arms: self.walk_expr(&**discr); let discr_cmt = return_if_err!(self.mc.cat_expr(&**discr)); self.borrow_expr(&**discr, ty::ReEmpty, ty::ImmBorrow, MatchDiscriminant); // treatment of the discriminant is handled while walking the arms. for arm in arms.iter() { self.walk_arm(discr_cmt.clone(), arm); }", "commid": "rust_pr_17413"}], "negative_passages": []}
{"query_id": "q-en-rust-1a12006d024279fa8475b1aa755ca045b88e351de146fd634f30bc96d4b7f187", "query": "Sorry for not providing smaller case but I hope this is better than nothing: Both versions works (compile and start), but runtime behavior changes. The code that follows the above part: will match no matter what really is, if the value is partially moved (so mouse events are updating my game). If I use the version with , the code works as intended and game is updated only on keyboard presses.\nMinimal: This shouldn't compile at all.\nNominating.\nLooks like a regression. 0.10 rejects this code.", "positive_passages": [{"docid": "doc-en-rust-fd37ac97447eb5eeac81e0070892e0989f6c5d87857a2e1f303e5365166f9853", "text": "ref s => { tcx.sess.span_bug( span, format!(\"ty_region() invoked on in appropriate ty: {:?}\", format!(\"ty_region() invoked on an inappropriate ty: {:?}\", s).as_slice()); } }", "commid": "rust_pr_17413"}], "negative_passages": []}
{"query_id": "q-en-rust-1a12006d024279fa8475b1aa755ca045b88e351de146fd634f30bc96d4b7f187", "query": "Sorry for not providing smaller case but I hope this is better than nothing: Both versions works (compile and start), but runtime behavior changes. The code that follows the above part: will match no matter what really is, if the value is partially moved (so mouse events are updating my game). If I use the version with , the code works as intended and game is updated only on keyboard presses.\nMinimal: This shouldn't compile at all.\nNominating.\nLooks like a regression. 0.10 rejects this code.", "positive_passages": [{"docid": "doc-en-rust-82268035238162cf5088440a8a821f1f3c0861f2faaefd58bb2094295a4d3388", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 ", "commid": "rust_pr_17413"}], "negative_passages": []}
{"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-f9b93a71773c25d0181f68e5cbca4b40914917a202ba1d03bf7416703b9bc8bc", "text": "{ let parent = self.get_parent(id); let parent = match self.find_entry(id) { Some(EntryForeignItem(..)) | Some(EntryVariant(..)) => { // Anonymous extern items, enum variants and struct ctors // go in the parent scope. Some(EntryForeignItem(..)) => { // Anonymous extern items go in the parent scope. self.get_parent(parent) } // But tuple struct ctors don't have names, so use the path of its", "commid": "rust_pr_27085"}], "negative_passages": []}
{"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-ebcd74389ede4290e4c5af07f2ed630e582f9ea4841a1e24fa3cb10e21e867f0", "text": "prim_ty_to_ty(tcx, base_segments, prim_ty) } _ => { let node = def.def_id().node; span_err!(tcx.sess, span, E0248, \"found value name used as a type: {:?}\", *def); \"found value `{}` used as a type\", tcx.map.path_to_string(node)); return this.tcx().types.err; } }", "commid": "rust_pr_27085"}], "negative_passages": []}
{"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-811b351450a0343ce7380c32cc93caa1cd1687e0fd4976fbf56f70b8ef1bb520", "text": "Bar } fn foo(x: Foo::Bar) {} //~ERROR found value name used as a type fn foo(x: Foo::Bar) {} //~ERROR found value `Foo::Bar` used as a type fn main() {}", "commid": "rust_pr_27085"}], "negative_passages": []}
{"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-95485d66c01546882055e9a56fa31bbf266fc4f7536ce706a455c968636d3b71", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 //~| ERROR assigned_leaf_path: `($(local p) as Zero)` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::Zero)` match p { Zero(..) => {} _ => {}", "commid": "rust_pr_27085"}], "negative_passages": []}
{"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-b37d649e1458a97ca15d9db1ce2444c5522bd71d252718f893a08553101c3647", "text": "#[rustc_move_fragments] pub fn test_match_full(p: Lonely //~| ERROR assigned_leaf_path: `($(local p) as Zero)` //~| ERROR assigned_leaf_path: `($(local p) as One)` //~| ERROR assigned_leaf_path: `($(local p) as Two)` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::Zero)` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::One)` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::Two)` match p { Zero(..) => {} One(..) => {}", "commid": "rust_pr_27085"}], "negative_passages": []}
{"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-e571967450a62871ffebeeeafab3b647ccbd345a34489715c4ece4999cc2b778", "text": "#[rustc_move_fragments] pub fn test_match_bind_one(p: Lonely //~| ERROR assigned_leaf_path: `($(local p) as Zero)` //~| ERROR parent_of_fragments: `($(local p) as One)` //~| ERROR moved_leaf_path: `($(local p) as One).#0` //~| ERROR assigned_leaf_path: `($(local p) as Two)` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::Zero)` //~| ERROR parent_of_fragments: `($(local p) as Lonely::One)` //~| ERROR moved_leaf_path: `($(local p) as Lonely::One).#0` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::Two)` //~| ERROR assigned_leaf_path: `$(local data)` match p { Zero(..) => {}", "commid": "rust_pr_27085"}], "negative_passages": []}
{"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-3055714d0a0076ed06eb06970e8c5a2fefad109c00b2fa737b9b13c45f77e277", "text": "#[rustc_move_fragments] pub fn test_match_bind_many(p: Lonely //~| ERROR assigned_leaf_path: `($(local p) as Zero)` //~| ERROR parent_of_fragments: `($(local p) as One)` //~| ERROR moved_leaf_path: `($(local p) as One).#0` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::Zero)` //~| ERROR parent_of_fragments: `($(local p) as Lonely::One)` //~| ERROR moved_leaf_path: `($(local p) as Lonely::One).#0` //~| ERROR assigned_leaf_path: `$(local data)` //~| ERROR parent_of_fragments: `($(local p) as Two)` //~| ERROR moved_leaf_path: `($(local p) as Two).#0` //~| ERROR moved_leaf_path: `($(local p) as Two).#1` //~| ERROR parent_of_fragments: `($(local p) as Lonely::Two)` //~| ERROR moved_leaf_path: `($(local p) as Lonely::Two).#0` //~| ERROR moved_leaf_path: `($(local p) as Lonely::Two).#1` //~| ERROR assigned_leaf_path: `$(local left)` //~| ERROR assigned_leaf_path: `$(local right)` match p {", "commid": "rust_pr_27085"}], "negative_passages": []}
{"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-d2ad1581699f1bf6f4b31bb4c3edda02cf70003bb64b97852be0570474e26fa5", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // Copyright 2014-2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. //", "commid": "rust_pr_27085"}], "negative_passages": []}
{"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-072931f0a08736b1d98904e8786be145a1e189dd6632aa1e277cc87101b84593", "text": "#[rustc_move_fragments] pub fn test_match_bind_and_underscore(p: Lonely //~| ERROR assigned_leaf_path: `($(local p) as Zero)` //~| ERROR assigned_leaf_path: `($(local p) as One)` //~| ERROR parent_of_fragments: `($(local p) as Two)` //~| ERROR moved_leaf_path: `($(local p) as Two).#0` //~| ERROR unmoved_fragment: `($(local p) as Two).#1` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::Zero)` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::One)` //~| ERROR parent_of_fragments: `($(local p) as Lonely::Two)` //~| ERROR moved_leaf_path: `($(local p) as Lonely::Two).#0` //~| ERROR unmoved_fragment: `($(local p) as Lonely::Two).#1` //~| ERROR assigned_leaf_path: `$(local left)` match p {", "commid": "rust_pr_27085"}], "negative_passages": []}
{"query_id": "q-en-rust-6452d80eb6b79925fe19a83cddf32060a56d592ce05d1e5097093835a315393c", "query": "Hi, I've been giving rust a try and hit my first compiler bug today. I was playing around with traits and ran into an error with the following code: In particular it seems it's the last line in that breaks things, if I comment that out it compiles (and runs) without issue. Here's output from rustc: I am using on OS X 10.9.4\nLooke like this no longer ICE-s: Now produces: Which seems like the right error message. Please reopen if you are still having trouble. Please reopen if you are still", "positive_passages": [{"docid": "doc-en-rust-19132abfb27af4f49ca30d8e2d0e18e3a621078a8576851670f89aa7943f27a1", "text": "**Source:** https://github.com/rust-lang/rust-analyzer/blob/master/crates/rust-analyzer/src/config.rs[config.rs] The <<_installation,Installation>> section contains details on configuration for some of the editors. The < The <<_installation,Installation>> section contains details on configuration for some of the editors. The < The <<_installation,Installation>> section contains details on configuration for some of the editors. The < fn write(&mut self, buf: &[u8]) -> IoResult<()> { self.write(buf) } fn write(&mut self, buf: &[u8]) -> IoResult<()> { (**self).write(buf) } #[inline] fn flush(&mut self) -> IoResult<()> { self.flush() } fn flush(&mut self) -> IoResult<()> { (**self).flush() } } /// A `RefWriter` is a struct implementing `Writer` which contains a reference", "commid": "rust_pr_17772"}], "negative_passages": []}
{"query_id": "q-en-rust-ab800df65a92713515962d1913ca9ea6d1199b4c294c6be9c5f5547a986df8da", "query": "Noticed in . This is either a really odd way to ask for the version, or a bit of advice that's no longer relevant. What should we say instead? is a bit too_ verbose \u2014 I killed it after it wrote 8.1 GB of logs compiling .\nI think the intent was indeed to get the version output. I think the phrasing is only odd in that it does not make that intent clear. (It is possible I am wrong about the intent here.) But there is an additional problem: Sometime between 0.10 and 0.11, we revised the / output to ensure it would be only one line when you do not supply an argument to . This means we lose potentially relevant information, namely the host system type (e.g. or ). We could fix both of these problems by revising the instructions to say something like:\nClosing since was merged.", "positive_passages": [{"docid": "doc-en-rust-f47b1748fd17d8d4077a5d4299b23180667da74da10daaf668f7be18b7c6ea1e", "text": "It generally helps our diagnosis to include your specific OS (for example: Mac OS X 10.8.3, Windows 7, Ubuntu 12.04) and your hardware architecture (for example: i686, x86_64). It's also helpful to copy/paste the output of re-running the erroneous rustc command with the `-v` flag. Finally, if you can run the offending command under gdb, pasting a stack trace can be useful; to do so, you will need to set a breakpoint on `rust_fail`. 
It's also helpful to provide the exact version and host by copying the output of re-running the erroneous rustc command with the `--version=verbose` flag, which will produce something like this: ```{ignore} rustc 0.12.0 (ba4081a5a 2014-10-07 13:44:41 -0700) binary: rustc commit-hash: ba4081a5a8573875fed17545846f6f6902c8ba8d commit-date: 2014-10-07 13:44:41 -0700 host: i686-apple-darwin release: 0.12.0 ``` Finally, if you can run the offending command under gdb, pasting a stack trace can be useful; to do so, you will need to set a breakpoint on `rust_fail`. # I submitted a bug, but nobody has commented on it!", "commid": "rust_pr_18217"}], "negative_passages": []}
{"query_id": "q-en-rust-ec25fd14d88d508f5ac3114434115077f45a6cdb8cd231156e128750ad9f21e6", "query": "When trying to do development on the stage1 compiler, I've been hitting a link error. I suspect something has gone wrong in our make dependencies because I think the problem goes away if you first do a full build before doing (I think, though I have not double-checked that scenario from scratch in a clean build dir yet).\nAlso, needs to be written much like itself, in that it needs to be able to build under a snapshot compiler. I have been encountering issues with its use of slicing syntax due to how the associated feature gate has come and gone.\nI know there is well-founded opposition to gating a PR on passing, but wouldn't just adding to the set of required build products at each stage during boot strap prevent errors like this in the future?\n(sigh; the issue may be isolated to the snapshot compiler ... my attempts to reproduce with have not managed to duplicate the problem here...)\nSo at this point its probably not feasible to fix the issue as described here, since I am pretty sure it is isolated to a problem in the snapshot compiler. But we can and should still try to increase the coverage of bors to handle at least building from a snapshot, if possible. (Maybe its more a problem with how we check the state of our snapshots?)\nAt least building seem reasonable! Running the tests may end up just making a longer cycle time longer unfortunately :(. (but compiletest is fast to build)\nIn case anyone is curious, the problem also occurs on Linux. I would be curious to know if taking a new snapshot would fix this. I'm not exactly sure how best to test that theory; my attempts to use to test this theory have been somewhat flummoxed by different problems.\nIt does indeed seem like if I first do , after the but before the , then things proceed just fine. This is evidence for my original hypothesis that something has gone wrong in our make dependencies. 
(odd that i could not reproduce the problem via another build though. Then again, there is still some weird rpath-ish stuff in our OS X builds of that I imagine could easily mask a problem like this.\n(PR only fixes the build issue; it does not attempt to add to the bors cycle.)\n(PR also does not actually fix the problem I mentioned in comment: . That can be resolved separately; but I will revise the PR comment to not say that it \"fixes\" this issue.)\n(I edited the PR description but forgot that github closes issues based on the original commit message.) I have a follow-on patch that resolves this in more fundamental ways, e.g. by both fixing the build itself, and also adding rules that will force bors to gate on continuing to build.", "positive_passages": [{"docid": "doc-en-rust-ecab79986a5f5b15d76efb8b078cd8104d835fe14f2319271e612c58a99a54c7", "text": "# Some less critical tests that are not prone to breakage. # Not run as part of the normal test suite, but tested by bors on checkin. check-secondary: check-lexer check-pretty check-secondary: check-build-compiletest check-lexer check-pretty # check + check-secondary. check-all: check check-secondary # # Issue #17883: build check-secondary first so hidden dependencies in # e.g. building compiletest are exercised (resolve those by adding # deps to rules that need them; not by putting `check` first here). check-all: check-secondary check # Pretty-printing tests. check-pretty: check-stage2-T-$(CFG_BUILD)-H-$(CFG_BUILD)-pretty-exec define DEF_CHECK_BUILD_COMPILETEST_FOR_STAGE check-stage$(1)-build-compiletest: \t$$(HBIN$(1)_H_$(CFG_BUILD))/compiletest$$(X_$(CFG_BUILD)) endef $(foreach stage,$(STAGES), $(eval $(call DEF_CHECK_BUILD_COMPILETEST_FOR_STAGE,$(stage)))) check-build-compiletest: check-stage1-build-compiletest check-stage2-build-compiletest .PHONY: cleantmptestlogs cleantestlibs cleantmptestlogs:", "commid": "rust_pr_18012"}], "negative_passages": []}
{"query_id": "q-en-rust-ec25fd14d88d508f5ac3114434115077f45a6cdb8cd231156e128750ad9f21e6", "query": "When trying to do development on the stage1 compiler, I've been hitting a link error. I suspect something has gone wrong in our make dependencies because I think the problem goes away if you first do a full build before doing (I think, though I have not double-checked that scenario from scratch in a clean build dir yet).\nAlso, needs to be written much like itself, in that it needs to be able to build under a snapshot compiler. I have been encountering issues with its use of slicing syntax due to how the associated feature gate has come and gone.\nI know there is well-founded opposition to gating a PR on passing, but wouldn't just adding to the set of required build products at each stage during boot strap prevent errors like this in the future?\n(sigh; the issue may be isolated to the snapshot compiler ... my attempts to reproduce with have not managed to duplicate the problem here...)\nSo at this point its probably not feasible to fix the issue as described here, since I am pretty sure it is isolated to a problem in the snapshot compiler. But we can and should still try to increase the coverage of bors to handle at least building from a snapshot, if possible. (Maybe its more a problem with how we check the state of our snapshots?)\nAt least building seem reasonable! Running the tests may end up just making a longer cycle time longer unfortunately :(. (but compiletest is fast to build)\nIn case anyone is curious, the problem also occurs on Linux. I would be curious to know if taking a new snapshot would fix this. I'm not exactly sure how best to test that theory; my attempts to use to test this theory have been somewhat flummoxed by different problems.\nIt does indeed seem like if I first do , after the but before the , then things proceed just fine. This is evidence for my original hypothesis that something has gone wrong in our make dependencies. 
(odd that i could not reproduce the problem via another build though. Then again, there is still some weird rpath-ish stuff in our OS X builds of that I imagine could easily mask a problem like this.\n(PR only fixes the build issue; it does not attempt to add to the bors cycle.)\n(PR also does not actually fix the problem I mentioned in comment: . That can be resolved separately; but I will revise the PR comment to not say that it \"fixes\" this issue.)\n(I edited the PR description but forgot that github closes issues based on the original commit message.) I have a follow-on patch that resolves this in more fundamental ways, e.g. by both fixing the build itself, and also adding rules that will force bors to gate on continuing to build.", "positive_passages": [{"docid": "doc-en-rust-29310bb14b15d5beb5024aa6f9b12246d5e25e23b7009e52024a78850fc668c5", "text": "PRETTY_DEPS_pretty-rfail = $(RFAIL_TESTS) PRETTY_DEPS_pretty-bench = $(BENCH_TESTS) PRETTY_DEPS_pretty-pretty = $(PRETTY_TESTS) # The stage- and host-specific dependencies are for e.g. macro_crate_test which pulls in # external crates. PRETTY_DEPS$(1)_H_$(3)_pretty-rpass = PRETTY_DEPS$(1)_H_$(3)_pretty-rpass-full = $$(HLIB$(1)_H_$(3))/stamp.syntax $$(HLIB$(1)_H_$(3))/stamp.rustc PRETTY_DEPS$(1)_H_$(3)_pretty-rfail = PRETTY_DEPS$(1)_H_$(3)_pretty-bench = PRETTY_DEPS$(1)_H_$(3)_pretty-pretty = PRETTY_DIRNAME_pretty-rpass = run-pass PRETTY_DIRNAME_pretty-rpass-full = run-pass-fulldeps PRETTY_DIRNAME_pretty-rfail = run-fail", "commid": "rust_pr_18012"}], "negative_passages": []}
{"query_id": "q-en-rust-ec25fd14d88d508f5ac3114434115077f45a6cdb8cd231156e128750ad9f21e6", "query": "When trying to do development on the stage1 compiler, I've been hitting a link error. I suspect something has gone wrong in our make dependencies because I think the problem goes away if you first do a full build before doing (I think, though I have not double-checked that scenario from scratch in a clean build dir yet).\nAlso, needs to be written much like itself, in that it needs to be able to build under a snapshot compiler. I have been encountering issues with its use of slicing syntax due to how the associated feature gate has come and gone.\nI know there is well-founded opposition to gating a PR on passing, but wouldn't just adding to the set of required build products at each stage during boot strap prevent errors like this in the future?\n(sigh; the issue may be isolated to the snapshot compiler ... my attempts to reproduce with have not managed to duplicate the problem here...)\nSo at this point its probably not feasible to fix the issue as described here, since I am pretty sure it is isolated to a problem in the snapshot compiler. But we can and should still try to increase the coverage of bors to handle at least building from a snapshot, if possible. (Maybe its more a problem with how we check the state of our snapshots?)\nAt least building seem reasonable! Running the tests may end up just making a longer cycle time longer unfortunately :(. (but compiletest is fast to build)\nIn case anyone is curious, the problem also occurs on Linux. I would be curious to know if taking a new snapshot would fix this. I'm not exactly sure how best to test that theory; my attempts to use to test this theory have been somewhat flummoxed by different problems.\nIt does indeed seem like if I first do , after the but before the , then things proceed just fine. This is evidence for my original hypothesis that something has gone wrong in our make dependencies. 
(odd that i could not reproduce the problem via another build though. Then again, there is still some weird rpath-ish stuff in our OS X builds of that I imagine could easily mask a problem like this.\n(PR only fixes the build issue; it does not attempt to add to the bors cycle.)\n(PR also does not actually fix the problem I mentioned in comment: . That can be resolved separately; but I will revise the PR comment to not say that it \"fixes\" this issue.)\n(I edited the PR description but forgot that github closes issues based on the original commit message.) I have a follow-on patch that resolves this in more fundamental ways, e.g. by both fixing the build itself, and also adding rules that will force bors to gate on continuing to build.", "positive_passages": [{"docid": "doc-en-rust-a5611babb5a32dcc78fe69805b515f2412dc6a1727c35003c1159f83aeef126e", "text": "$$(call TEST_OK_FILE,$(1),$(2),$(3),$(4)): $$(TEST_SREQ$(1)_T_$(2)_H_$(3)) $$(PRETTY_DEPS_$(4)) $$(PRETTY_DEPS_$(4)) $$(PRETTY_DEPS$(1)_H_$(3)_$(4)) @$$(call E, run pretty-rpass [$(2)]: $$<) $$(Q)$$(call CFG_RUN_CTEST_$(2),$(1),$$<,$(3)) $$(PRETTY_ARGS$(1)-T-$(2)-H-$(3)-$(4)) ", "commid": "rust_pr_18012"}], "negative_passages": []}
{"query_id": "q-en-rust-2bd08bac4593184fa1555a33ac9920af931f01de868bc8dc2f6eca460a1ed72a", "query": "Technically speaking the negative duration is not the valid ISO 8601, but we need to print it anyway. This code: Produces: Should be , in fact Should be , in fact\nCc", "positive_passages": [{"docid": "doc-en-rust-b366ef045ef1049cdf63d4d8f45bb12c9863cd4306c56a94b3af612efa8efa2a", "text": "impl fmt::Show for Duration { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let days = self.num_days(); let secs = self.secs - days * SECS_PER_DAY; // technically speaking, negative duration is not valid ISO 8601, // but we need to print it anyway. let (abs, sign) = if self.secs < 0 { (-self, \"-\") } else { (*self, \"\") }; let days = abs.secs / SECS_PER_DAY; let secs = abs.secs - days * SECS_PER_DAY; let hasdate = days != 0; let hastime = (secs != 0 || self.nanos != 0) || !hasdate; let hastime = (secs != 0 || abs.nanos != 0) || !hasdate; try!(write!(f, \"{}P\", sign)); try!(write!(f, \"P\")); if hasdate { // technically speaking the negative part is not the valid ISO 8601, // but we need to print it anyway. try!(write!(f, \"{}D\", days)); } if hastime { if self.nanos == 0 { if abs.nanos == 0 { try!(write!(f, \"T{}S\", secs)); } else if self.nanos % NANOS_PER_MILLI == 0 { try!(write!(f, \"T{}.{:03}S\", secs, self.nanos / NANOS_PER_MILLI)); } else if self.nanos % NANOS_PER_MICRO == 0 { try!(write!(f, \"T{}.{:06}S\", secs, self.nanos / NANOS_PER_MICRO)); } else if abs.nanos % NANOS_PER_MILLI == 0 { try!(write!(f, \"T{}.{:03}S\", secs, abs.nanos / NANOS_PER_MILLI)); } else if abs.nanos % NANOS_PER_MICRO == 0 { try!(write!(f, \"T{}.{:06}S\", secs, abs.nanos / NANOS_PER_MICRO)); } else { try!(write!(f, \"T{}.{:09}S\", secs, self.nanos)); try!(write!(f, \"T{}.{:09}S\", secs, abs.nanos)); } } Ok(())", "commid": "rust_pr_18359"}], "negative_passages": []}
{"query_id": "q-en-rust-2bd08bac4593184fa1555a33ac9920af931f01de868bc8dc2f6eca460a1ed72a", "query": "Technically speaking the negative duration is not the valid ISO 8601, but we need to print it anyway. This code: Produces: Should be , in fact Should be , in fact\nCc", "positive_passages": [{"docid": "doc-en-rust-d69a06d5cf88439ab664c2a05f9d905ae5bc70a4ea96bd64850c9782d9b56be5", "text": "let d: Duration = Zero::zero(); assert_eq!(d.to_string(), \"PT0S\".to_string()); assert_eq!(Duration::days(42).to_string(), \"P42D\".to_string()); assert_eq!(Duration::days(-42).to_string(), \"P-42D\".to_string()); assert_eq!(Duration::days(-42).to_string(), \"-P42D\".to_string()); assert_eq!(Duration::seconds(42).to_string(), \"PT42S\".to_string()); assert_eq!(Duration::milliseconds(42).to_string(), \"PT0.042S\".to_string()); assert_eq!(Duration::microseconds(42).to_string(), \"PT0.000042S\".to_string()); assert_eq!(Duration::nanoseconds(42).to_string(), \"PT0.000000042S\".to_string()); assert_eq!((Duration::days(7) + Duration::milliseconds(6543)).to_string(), \"P7DT6.543S\".to_string()); assert_eq!(Duration::seconds(-86401).to_string(), \"-P1DT1S\".to_string()); assert_eq!(Duration::nanoseconds(-1).to_string(), \"-PT0.000000001S\".to_string()); // the format specifier should have no effect on `Duration` assert_eq!(format!(\"{:30}\", Duration::days(1) + Duration::milliseconds(2345)),", "commid": "rust_pr_18359"}], "negative_passages": []}
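The record above discusses printing negative durations in ISO 8601 by hoisting the sign in front of the whole designator (`-P1DT1S`) instead of attaching it to each component (`P-1DT-1S`). The following is a hedged sketch of that sign handling, not the actual libstd `Duration` implementation; it assumes `secs` and `nanos` carry the same sign, and for simplicity it always prints nine fractional digits rather than trimming to milli/microsecond precision as the real patch does.

```rust
// Illustrative sketch (not libstd code): format a signed duration as ISO 8601,
// emitting the sign once up front ("-P1DT1S") rather than per component.
// Assumes `secs` and `nanos` share the same sign.
fn format_iso8601(secs: i64, nanos: i32) -> String {
    const SECS_PER_DAY: i64 = 86_400;
    // Take the absolute value and remember the sign separately.
    let (sign, secs, nanos) = if secs < 0 || nanos < 0 {
        ("-", -secs, -nanos)
    } else {
        ("", secs, nanos)
    };
    let days = secs / SECS_PER_DAY;
    let rem = secs - days * SECS_PER_DAY;
    let mut out = format!("{sign}P");
    if days != 0 {
        out.push_str(&format!("{days}D"));
    }
    // Emit a time component unless the duration is a whole number of days.
    if rem != 0 || nanos != 0 || days == 0 {
        if nanos == 0 {
            out.push_str(&format!("T{rem}S"));
        } else {
            // Simplification: always nanosecond precision.
            out.push_str(&format!("T{rem}.{nanos:09}S"));
        }
    }
    out
}

fn main() {
    assert_eq!(format_iso8601(0, 0), "PT0S");
    assert_eq!(format_iso8601(42 * 86_400, 0), "P42D");
    assert_eq!(format_iso8601(-86_401, 0), "-P1DT1S");
    assert_eq!(format_iso8601(0, -1), "-PT0.000000001S");
    println!("ok");
}
```

Note how this matches the test expectations in the patch above (`-P1DT1S`, `-PT0.000000001S`): negating once and formatting the absolute value avoids the invalid per-component signs.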
{"query_id": "q-en-rust-28bc114395ce3c204b6ca391330a1753c30a2c95379e09e92a3c36c37a321a3c", "query": "Linking may fail when an incompatible installation of MinGW is on the PATH, because rustc ends up running external linker instead of the bundled one. I keep seeing people running into this problem when using WIndows installer. The solution it to run rustc in a console with stripped-down PATH, but this workaround is very nonobvious. I am afraid this will negatively affect perception of quality of our Windows distribution because many early adopters will have some version of mingw already installed on their machines. I think the earlier decision to make rustc prefer bundled binaries () was the correct one. We should un-revert , and fix cargo build instead.\ncc , , rust-lang/cargo\nRegarding cargo, I think we can take one of the following approaches: Use the snapshot compiler. Specify linker path explicitly via . Delete/rename directory after installing. Add a new command line flag to rustc for controlling this behavior. Perhaps something like ? cc\nIssues like are definitely quite worrying, but I would see this as more than just a problem with Cargo but also extending to any project relying on libs installed on windows by some compiler toolchain. The bundled gcc doesn't know anything about system lookup paths, and I see this as akin to saying we should run gcc on unix with no default lookup paths (which would likely be disastrous). I suspect that there are other projects which rely on libs coming from a MinGW, and this seems similar to relying on randomly installed libraries on unix systems, so I'm not sure if it's necessarily a bad thing. Do we know why the error in is occurring? It looks like there's an incompatibility with the MinGW installation, but do we know precisely what it is? 
I was under the impression that linkers were intended to interoperate well...\nI don't really want gcc to be able to find arbitrary mingw libraries because I don't want Rust users on Windows to depend on mingw (imagine cargo packages that expect mingw libs but only a fraction of windows users actually have it installed). I've misunderstood the implications of this issue before, but my inclination is to do the revert and make the cargo build do the extra work to get at the libs it needs.\nNominating since this issue gets hit pretty often and is the last major usability problem on windows.\nNot quite: mingw gcc has a default library search path built-it, and it is expressed relative to the installation point. So the one we bundle is going to look for libraries only under (unless we supply more via , of course). This is mostly about library compatibility, not about linker. GCC may emit references to library symbols, that may not exist/be buggy in older versions of mingw. This particular one probably happened because the developer had installed mingw with sjlj exception handling, which defines UnwindSjLjResume instead of UnwindResume.\nAssigning P-high, 1.0.\nNot being able to access the full set of mingw libs means that people can't link to random GNU libs (which I'm in favor of). I'm unclear on whether it also prevents people from linking to various windows system libraries (I have this vague idea that mingw includes artifacts that make the linker able to link to e.g. ). Is there any concern there?\ngcc needs to recognize system library (), so picks up . We ship , but we don't ship or so , rust-lang/cargo, and occurs. So, what's the role of bundle? I thought bundle is for providing \"minimal\" linking environment so that beginners can build hello-world without installing mingw-w64. In this scenario, if user want to do more than bundle can provide (e.g. gdi32), user must install mingw-w64. At least, this is current status. Maybe we can bundle ? 
is 72M.\nPreferring bundle means: if user has old mingw, will not use it. This solves lots of of beginners' issues. , if user really wants to use their own linker, user need to do something. This complicates build story. So my suggestion is 1) prefer bundle and 2) add a flag to not use bundle at all (rather than changing preference). Maybe or ?\non the other hand, there are a few libs that need to be matched to the version of gcc that Rust was built with. It's mostly stuff like and ; possibly also ,. So simply deferring to external linker probably isn't a good idea, unless we ensure that our bundled libs are being found first. I kinda like the idea of shipping the minimal set of import libraries that is sufficient for linking Rust's standard library. We could add a few more common ones like , but everything is a bit much, IMHO. For reference, here's the list of import libraries that D ships with: , , , , , ,, ,, , , , , , , , . Btw, mingw can also link to dlls directly, so in many cases adding to the command line is enough.\nld can also link to dlls directly? Is it possible to #link directly in them in source? #[link(path = \"%systemroot%system32\")] or something like that?\nThat D ships a set of system import libraries is encouraging. Though if mingw can link to dlls directly it's not clear why this is such a problem - if we were to just have rustc add the system32 path by default would this problem with windows import libraries go away completely?", "positive_passages": [{"docid": "doc-en-rust-5a64775c8133565cc5ef4a9d94cd7af013ee6fd714f1702d23e068ee505df233", "text": "return found def make_win_dist(dist_root, target_triple): # Ask gcc where it keeps its' stuff # Ask gcc where it keeps its stuff gcc_out = subprocess.check_output([\"gcc.exe\", \"-print-search-dirs\"]) bin_path = os.environ[\"PATH\"].split(os.pathsep) lib_path = []", "commid": "rust_pr_18797"}], "negative_passages": []}
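The installer script in the passage above asks gcc where it keeps its libraries via `gcc -print-search-dirs` and then splits the reported paths. As a hedged illustration of that parsing step (in Rust rather than the script's Python, and against a hand-written sample string rather than a real gcc invocation), one could extract the `libraries:` line like this; the `=` prefix and the `;` separator mirror the Windows-style output the script deals with, but the exact sample paths are invented for the example.

```rust
// Illustrative sketch: pull the library search paths out of
// `gcc -print-search-dirs` style output. The sample string in `main` is
// invented; real output differs by toolchain and platform (Unix uses ':'
// as the separator instead of ';').
fn library_search_paths(gcc_output: &str) -> Vec<String> {
    gcc_output
        .lines()
        .find_map(|line| line.strip_prefix("libraries: ="))
        .map(|paths| paths.split(';').map(str::to_owned).collect())
        .unwrap_or_default()
}

fn main() {
    let sample = "install: c:/mingw/\nlibraries: =c:/mingw/lib;c:/mingw/x86_64-w64-mingw32/lib";
    for p in library_search_paths(sample) {
        println!("{p}");
    }
}
```

Splitting these paths out is what lets the dist script locate and copy the exact `crt*.o`, `libgcc*`, and import libraries it bundles, instead of relying on whatever MinGW happens to be first on the user's PATH.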
{"query_id": "q-en-rust-28bc114395ce3c204b6ca391330a1753c30a2c95379e09e92a3c36c37a321a3c", "query": "Linking may fail when an incompatible installation of MinGW is on the PATH, because rustc ends up running external linker instead of the bundled one. I keep seeing people running into this problem when using WIndows installer. The solution it to run rustc in a console with stripped-down PATH, but this workaround is very nonobvious. I am afraid this will negatively affect perception of quality of our Windows distribution because many early adopters will have some version of mingw already installed on their machines. I think the earlier decision to make rustc prefer bundled binaries () was the correct one. We should un-revert , and fix cargo build instead.\ncc , , rust-lang/cargo\nRegarding cargo, I think we can take one of the following approaches: Use the snapshot compiler. Specify linker path explicitly via . Delete/rename directory after installing. Add a new command line flag to rustc for controlling this behavior. Perhaps something like ? cc\nIssues like are definitely quite worrying, but I would see this as more than just a problem with Cargo but also extending to any project relying on libs installed on windows by some compiler toolchain. The bundled gcc doesn't know anything about system lookup paths, and I see this as akin to saying we should run gcc on unix with no default lookup paths (which would likely be disastrous). I suspect that there are other projects which rely on libs coming from a MinGW, and this seems similar to relying on randomly installed libraries on unix systems, so I'm not sure if it's necessarily a bad thing. Do we know why the error in is occurring? It looks like there's an incompatibility with the MinGW installation, but do we know precisely what it is? 
I was under the impression that linkers were intended to interoperate well...\nI don't really want gcc to be able to find arbitrary mingw libraries because I don't want Rust users on Windows to depend on mingw (imagine cargo packages that expect mingw libs but only a fraction of windows users actually have it installed). I've misunderstood the implications of this issue before, but my inclination is to do the revert and make the cargo build do the extra work to get at the libs it needs.\nNominating since this issue gets hit pretty often and is the last major usability problem on windows.\nNot quite: mingw gcc has a default library search path built-it, and it is expressed relative to the installation point. So the one we bundle is going to look for libraries only under (unless we supply more via , of course). This is mostly about library compatibility, not about linker. GCC may emit references to library symbols, that may not exist/be buggy in older versions of mingw. This particular one probably happened because the developer had installed mingw with sjlj exception handling, which defines UnwindSjLjResume instead of UnwindResume.\nAssigning P-high, 1.0.\nNot being able to access the full set of mingw libs means that people can't link to random GNU libs (which I'm in favor of). I'm unclear on whether it also prevents people from linking to various windows system libraries (I have this vague idea that mingw includes artifacts that make the linker able to link to e.g. ). Is there any concern there?\ngcc needs to recognize system library (), so picks up . We ship , but we don't ship or so , rust-lang/cargo, and occurs. So, what's the role of bundle? I thought bundle is for providing \"minimal\" linking environment so that beginners can build hello-world without installing mingw-w64. In this scenario, if user want to do more than bundle can provide (e.g. gdi32), user must install mingw-w64. At least, this is current status. Maybe we can bundle ? 
is 72M.\nPreferring bundle means: if user has old mingw, will not use it. This solves lots of of beginners' issues. , if user really wants to use their own linker, user need to do something. This complicates build story. So my suggestion is 1) prefer bundle and 2) add a flag to not use bundle at all (rather than changing preference). Maybe or ?\non the other hand, there are a few libs that need to be matched to the version of gcc that Rust was built with. It's mostly stuff like and ; possibly also ,. So simply deferring to external linker probably isn't a good idea, unless we ensure that our bundled libs are being found first. I kinda like the idea of shipping the minimal set of import libraries that is sufficient for linking Rust's standard library. We could add a few more common ones like , but everything is a bit much, IMHO. For reference, here's the list of import libraries that D ships with: , , , , , ,, ,, , , , , , , , . Btw, mingw can also link to dlls directly, so in many cases adding to the command line is enough.\nld can also link to dlls directly? Is it possible to #link directly in them in source? #[link(path = \"%systemroot%system32\")] or something like that?\nThat D ships a set of system import libraries is encouraging. 
Though if mingw can link to dlls directly it's not clear why this is such a problem - if we were to just have rustc add the system32 path by default would this problem with windows import libraries go away completely?", "positive_passages": [{"docid": "doc-en-rust-7352202efc5f8c2b688f1c17f135a04122fba4f87c2675c2eed1b068e1714fde", "text": "else: rustc_dlls.append(\"libgcc_s_seh-1.dll\") target_libs = [\"crtbegin.o\", \"crtend.o\", \"crt2.o\", \"dllcrt2.o\", \"libadvapi32.a\", \"libcrypt32.a\", \"libgcc.a\", \"libgcc_eh.a\", \"libgcc_s.a\", \"libimagehlp.a\", \"libiphlpapi.a\", \"libkernel32.a\", \"libm.a\", \"libmingw32.a\", \"libmingwex.a\", \"libmsvcrt.a\", \"libpsapi.a\", \"libshell32.a\", \"libstdc++.a\", \"libuser32.a\", \"libws2_32.a\", \"libiconv.a\", \"libmoldname.a\"] target_libs = [ # MinGW libs \"crtbegin.o\", \"crtend.o\", \"crt2.o\", \"dllcrt2.o\", \"libgcc.a\", \"libgcc_eh.a\", \"libgcc_s.a\", \"libm.a\", \"libmingw32.a\", \"libmingwex.a\", \"libstdc++.a\", \"libiconv.a\", \"libmoldname.a\", # Windows import libs \"libadvapi32.a\", \"libbcrypt.a\", \"libcomctl32.a\", \"libcomdlg32.a\", \"libcrypt32.a\", \"libctl3d32.a\", \"libgdi32.a\", \"libimagehlp.a\", \"libiphlpapi.a\", \"libkernel32.a\", \"libmsvcrt.a\", \"libodbc32.a\", \"libole32.a\", \"liboleaut32.a\", \"libopengl32.a\", \"libpsapi.a\", \"librpcrt4.a\", \"libsetupapi.a\", \"libshell32.a\", \"libuser32.a\", \"libuuid.a\", \"libwinhttp.a\", \"libwinmm.a\", \"libwinspool.a\", \"libws2_32.a\", \"libwsock32.a\", ] # Find mingw artifacts we want to bundle target_tools = find_files(target_tools, bin_path)", "commid": "rust_pr_18797"}], "negative_passages": []}
{"query_id": "q-en-rust-28bc114395ce3c204b6ca391330a1753c30a2c95379e09e92a3c36c37a321a3c", "query": "Linking may fail when an incompatible installation of MinGW is on the PATH, because rustc ends up running external linker instead of the bundled one. I keep seeing people running into this problem when using WIndows installer. The solution it to run rustc in a console with stripped-down PATH, but this workaround is very nonobvious. I am afraid this will negatively affect perception of quality of our Windows distribution because many early adopters will have some version of mingw already installed on their machines. I think the earlier decision to make rustc prefer bundled binaries () was the correct one. We should un-revert , and fix cargo build instead.\ncc , , rust-lang/cargo\nRegarding cargo, I think we can take one of the following approaches: Use the snapshot compiler. Specify linker path explicitly via . Delete/rename directory after installing. Add a new command line flag to rustc for controlling this behavior. Perhaps something like ? cc\nIssues like are definitely quite worrying, but I would see this as more than just a problem with Cargo but also extending to any project relying on libs installed on windows by some compiler toolchain. The bundled gcc doesn't know anything about system lookup paths, and I see this as akin to saying we should run gcc on unix with no default lookup paths (which would likely be disastrous). I suspect that there are other projects which rely on libs coming from a MinGW, and this seems similar to relying on randomly installed libraries on unix systems, so I'm not sure if it's necessarily a bad thing. Do we know why the error in is occurring? It looks like there's an incompatibility with the MinGW installation, but do we know precisely what it is? 
I was under the impression that linkers were intended to interoperate well...\nI don't really want gcc to be able to find arbitrary mingw libraries because I don't want Rust users on Windows to depend on mingw (imagine cargo packages that expect mingw libs but only a fraction of windows users actually have it installed). I've misunderstood the implications of this issue before, but my inclination is to do the revert and make the cargo build do the extra work to get at the libs it needs.\nNominating since this issue gets hit pretty often and is the last major usability problem on windows.\nNot quite: mingw gcc has a default library search path built-it, and it is expressed relative to the installation point. So the one we bundle is going to look for libraries only under (unless we supply more via , of course). This is mostly about library compatibility, not about linker. GCC may emit references to library symbols, that may not exist/be buggy in older versions of mingw. This particular one probably happened because the developer had installed mingw with sjlj exception handling, which defines UnwindSjLjResume instead of UnwindResume.\nAssigning P-high, 1.0.\nNot being able to access the full set of mingw libs means that people can't link to random GNU libs (which I'm in favor of). I'm unclear on whether it also prevents people from linking to various windows system libraries (I have this vague idea that mingw includes artifacts that make the linker able to link to e.g. ). Is there any concern there?\ngcc needs to recognize system library (), so picks up . We ship , but we don't ship or so , rust-lang/cargo, and occurs. So, what's the role of bundle? I thought bundle is for providing \"minimal\" linking environment so that beginners can build hello-world without installing mingw-w64. In this scenario, if user want to do more than bundle can provide (e.g. gdi32), user must install mingw-w64. At least, this is current status. Maybe we can bundle ? 
is 72M.\nPreferring bundle means: if user has old mingw, will not use it. This solves lots of of beginners' issues. , if user really wants to use their own linker, user need to do something. This complicates build story. So my suggestion is 1) prefer bundle and 2) add a flag to not use bundle at all (rather than changing preference). Maybe or ?\non the other hand, there are a few libs that need to be matched to the version of gcc that Rust was built with. It's mostly stuff like and ; possibly also ,. So simply deferring to external linker probably isn't a good idea, unless we ensure that our bundled libs are being found first. I kinda like the idea of shipping the minimal set of import libraries that is sufficient for linking Rust's standard library. We could add a few more common ones like , but everything is a bit much, IMHO. For reference, here's the list of import libraries that D ships with: , , , , , ,, ,, , , , , , , , . Btw, mingw can also link to dlls directly, so in many cases adding to the command line is enough.\nld can also link to dlls directly? Is it possible to #link directly in them in source? #[link(path = \"%systemroot%system32\")] or something like that?\nThat D ships a set of system import libraries is encouraging. 
Though if mingw can link to dlls directly it's not clear why this is such a problem - if we were to just have rustc add the system32 path by default would this problem with windows import libraries go away completely?", "positive_passages": [{"docid": "doc-en-rust-f3d1b2ee3c688634debb3fdfc86ec3275bb56afec99cd371a51fffade30f8cc5", "text": "shutil.copy(src, dist_bin_dir) # Copy platform tools to platform-specific bin directory target_bin_dir = os.path.join(dist_root, \"bin\", \"rustlib\", target_triple, \"gcc\", \"bin\") target_bin_dir = os.path.join(dist_root, \"bin\", \"rustlib\", target_triple, \"bin\") if not os.path.exists(target_bin_dir): os.makedirs(target_bin_dir) for src in target_tools: shutil.copy(src, target_bin_dir) # Copy platform libs to platform-spcific lib directory target_lib_dir = os.path.join(dist_root, \"bin\", \"rustlib\", target_triple, \"gcc\", \"lib\") target_lib_dir = os.path.join(dist_root, \"bin\", \"rustlib\", target_triple, \"lib\") if not os.path.exists(target_lib_dir): os.makedirs(target_lib_dir) for src in target_libs:", "commid": "rust_pr_18797"}], "negative_passages": []}
{"query_id": "q-en-rust-28bc114395ce3c204b6ca391330a1753c30a2c95379e09e92a3c36c37a321a3c", "query": "Linking may fail when an incompatible installation of MinGW is on the PATH, because rustc ends up running external linker instead of the bundled one. I keep seeing people running into this problem when using WIndows installer. The solution it to run rustc in a console with stripped-down PATH, but this workaround is very nonobvious. I am afraid this will negatively affect perception of quality of our Windows distribution because many early adopters will have some version of mingw already installed on their machines. I think the earlier decision to make rustc prefer bundled binaries () was the correct one. We should un-revert , and fix cargo build instead.\ncc , , rust-lang/cargo\nRegarding cargo, I think we can take one of the following approaches: Use the snapshot compiler. Specify linker path explicitly via . Delete/rename directory after installing. Add a new command line flag to rustc for controlling this behavior. Perhaps something like ? cc\nIssues like are definitely quite worrying, but I would see this as more than just a problem with Cargo but also extending to any project relying on libs installed on windows by some compiler toolchain. The bundled gcc doesn't know anything about system lookup paths, and I see this as akin to saying we should run gcc on unix with no default lookup paths (which would likely be disastrous). I suspect that there are other projects which rely on libs coming from a MinGW, and this seems similar to relying on randomly installed libraries on unix systems, so I'm not sure if it's necessarily a bad thing. Do we know why the error in is occurring? It looks like there's an incompatibility with the MinGW installation, but do we know precisely what it is? 
I was under the impression that linkers were intended to interoperate well...\nI don't really want gcc to be able to find arbitrary mingw libraries because I don't want Rust users on Windows to depend on mingw (imagine cargo packages that expect mingw libs but only a fraction of windows users actually have it installed). I've misunderstood the implications of this issue before, but my inclination is to do the revert and make the cargo build do the extra work to get at the libs it needs.\nNominating since this issue gets hit pretty often and is the last major usability problem on windows.\nNot quite: mingw gcc has a default library search path built-it, and it is expressed relative to the installation point. So the one we bundle is going to look for libraries only under (unless we supply more via , of course). This is mostly about library compatibility, not about linker. GCC may emit references to library symbols, that may not exist/be buggy in older versions of mingw. This particular one probably happened because the developer had installed mingw with sjlj exception handling, which defines UnwindSjLjResume instead of UnwindResume.\nAssigning P-high, 1.0.\nNot being able to access the full set of mingw libs means that people can't link to random GNU libs (which I'm in favor of). I'm unclear on whether it also prevents people from linking to various windows system libraries (I have this vague idea that mingw includes artifacts that make the linker able to link to e.g. ). Is there any concern there?\ngcc needs to recognize system library (), so picks up . We ship , but we don't ship or so , rust-lang/cargo, and occurs. So, what's the role of bundle? I thought bundle is for providing \"minimal\" linking environment so that beginners can build hello-world without installing mingw-w64. In this scenario, if user want to do more than bundle can provide (e.g. gdi32), user must install mingw-w64. At least, this is current status. Maybe we can bundle ? 
is 72M.\nPreferring bundle means: if user has old mingw, will not use it. This solves lots of of beginners' issues. , if user really wants to use their own linker, user need to do something. This complicates build story. So my suggestion is 1) prefer bundle and 2) add a flag to not use bundle at all (rather than changing preference). Maybe or ?\non the other hand, there are a few libs that need to be matched to the version of gcc that Rust was built with. It's mostly stuff like and ; possibly also ,. So simply deferring to external linker probably isn't a good idea, unless we ensure that our bundled libs are being found first. I kinda like the idea of shipping the minimal set of import libraries that is sufficient for linking Rust's standard library. We could add a few more common ones like , but everything is a bit much, IMHO. For reference, here's the list of import libraries that D ships with: , , , , , ,, ,, , , , , , , , . Btw, mingw can also link to dlls directly, so in many cases adding to the command line is enough.\nld can also link to dlls directly? Is it possible to #link directly in them in source? #[link(path = \"%systemroot%system32\")] or something like that?\nThat D ships a set of system import libraries is encouraging. Though if mingw can link to dlls directly it's not clear why this is such a problem - if we were to just have rustc add the system32 path by default would this problem with windows import libraries go away completely?", "positive_passages": [{"docid": "doc-en-rust-edfcbb80f35184875d93829291391d692c0720b88643782c304fe5053aecbfc3", "text": "cmd.arg(obj_filename.with_extension(\"metadata.o\")); } // Rust does its' own LTO cmd.arg(\"-fno-lto\"); if t.options.is_like_osx { // The dead_strip option to the linker specifies that functions and data // unreachable by the entry point will be removed. This is quite useful", "commid": "rust_pr_18797"}], "negative_passages": []}
{"query_id": "q-en-rust-28bc114395ce3c204b6ca391330a1753c30a2c95379e09e92a3c36c37a321a3c", "query": "Linking may fail when an incompatible installation of MinGW is on the PATH, because rustc ends up running the external linker instead of the bundled one. I keep seeing people running into this problem when using the Windows installer. The solution is to run rustc in a console with a stripped-down PATH, but this workaround is very nonobvious. I am afraid this will negatively affect the perception of quality of our Windows distribution because many early adopters will have some version of mingw already installed on their machines. I think the earlier decision to make rustc prefer bundled binaries () was the correct one. We should un-revert , and fix the cargo build instead.\ncc , , rust-lang/cargo\nRegarding cargo, I think we can take one of the following approaches: Use the snapshot compiler. Specify the linker path explicitly via . Delete/rename the directory after installing. Add a new command line flag to rustc for controlling this behavior. Perhaps something like ? cc\nIssues like are definitely quite worrying, but I would see this as more than just a problem with Cargo, also extending to any project relying on libs installed on windows by some compiler toolchain. The bundled gcc doesn't know anything about system lookup paths, and I see this as akin to saying we should run gcc on unix with no default lookup paths (which would likely be disastrous). I suspect that there are other projects which rely on libs coming from a MinGW, and this seems similar to relying on randomly installed libraries on unix systems, so I'm not sure if it's necessarily a bad thing. Do we know why the error in is occurring? It looks like there's an incompatibility with the MinGW installation, but do we know precisely what it is? 
I was under the impression that linkers were intended to interoperate well...\nI don't really want gcc to be able to find arbitrary mingw libraries because I don't want Rust users on Windows to depend on mingw (imagine cargo packages that expect mingw libs but only a fraction of windows users actually have it installed). I've misunderstood the implications of this issue before, but my inclination is to do the revert and make the cargo build do the extra work to get at the libs it needs.\nNominating since this issue gets hit pretty often and is the last major usability problem on windows.\nNot quite: mingw gcc has a default library search path built-in, and it is expressed relative to the installation point. So the one we bundle is going to look for libraries only under (unless we supply more via , of course). This is mostly about library compatibility, not about the linker. GCC may emit references to library symbols that may not exist/be buggy in older versions of mingw. This particular one probably happened because the developer had installed mingw with sjlj exception handling, which defines UnwindSjLjResume instead of UnwindResume.\nAssigning P-high, 1.0.\nNot being able to access the full set of mingw libs means that people can't link to random GNU libs (which I'm in favor of). I'm unclear on whether it also prevents people from linking to various windows system libraries (I have this vague idea that mingw includes artifacts that make the linker able to link to e.g. ). Is there any concern there?\ngcc needs to recognize system library (), so picks up . We ship , but we don't ship or so , rust-lang/cargo, and occurs. So, what's the role of the bundle? I thought the bundle is for providing a \"minimal\" linking environment so that beginners can build hello-world without installing mingw-w64. In this scenario, if a user wants to do more than the bundle can provide (e.g. gdi32), the user must install mingw-w64. At least, this is the current status. Maybe we can bundle ? 
is 72M.\nPreferring the bundle means: if the user has an old mingw, it will not be used. This solves lots of beginners' issues. , if the user really wants to use their own linker, the user needs to do something. This complicates the build story. So my suggestion is 1) prefer the bundle and 2) add a flag to not use the bundle at all (rather than changing preference). Maybe or ?\nOn the other hand, there are a few libs that need to be matched to the version of gcc that Rust was built with. It's mostly stuff like and ; possibly also ,. So simply deferring to the external linker probably isn't a good idea, unless we ensure that our bundled libs are being found first. I kinda like the idea of shipping the minimal set of import libraries that is sufficient for linking Rust's standard library. We could add a few more common ones like , but everything is a bit much, IMHO. For reference, here's the list of import libraries that D ships with: , , , , , ,, ,, , , , , , , , . Btw, mingw can also link to dlls directly, so in many cases adding to the command line is enough.\nld can also link to dlls directly? Is it possible to #link directly in them in source? #[link(path = \"%systemroot%system32\")] or something like that?\nThat D ships a set of system import libraries is encouraging. 
Though if mingw can link to dlls directly it's not clear why this is such a problem - if we were to just have rustc add the system32 path by default would this problem with windows import libraries go away completely?", "positive_passages": [{"docid": "doc-en-rust-29d8a8c9b4820cddb7e2682607c23d70ffd9d905c299846a9497236e348c69ad", "text": "trans: &CrateTranslation, outputs: &OutputFilenames) { let old_path = os::getenv(\"PATH\").unwrap_or_else(||String::new()); let mut new_path = os::split_paths(old_path.as_slice()); new_path.extend(sess.host_filesearch().get_tools_search_paths().into_iter()); let mut new_path = sess.host_filesearch().get_tools_search_paths(); new_path.extend(os::split_paths(old_path.as_slice()).into_iter()); os::setenv(\"PATH\", os::join_paths(new_path.as_slice()).unwrap()); time(sess.time_passes(), \"linking\", (), |_|", "commid": "rust_pr_18797"}], "negative_passages": []}
{"query_id": "q-en-rust-28bc114395ce3c204b6ca391330a1753c30a2c95379e09e92a3c36c37a321a3c", "query": "Linking may fail when an incompatible installation of MinGW is on the PATH, because rustc ends up running the external linker instead of the bundled one. I keep seeing people running into this problem when using the Windows installer. The solution is to run rustc in a console with a stripped-down PATH, but this workaround is very nonobvious. I am afraid this will negatively affect the perception of quality of our Windows distribution because many early adopters will have some version of mingw already installed on their machines. I think the earlier decision to make rustc prefer bundled binaries () was the correct one. We should un-revert , and fix the cargo build instead.\ncc , , rust-lang/cargo\nRegarding cargo, I think we can take one of the following approaches: Use the snapshot compiler. Specify the linker path explicitly via . Delete/rename the directory after installing. Add a new command line flag to rustc for controlling this behavior. Perhaps something like ? cc\nIssues like are definitely quite worrying, but I would see this as more than just a problem with Cargo, also extending to any project relying on libs installed on windows by some compiler toolchain. The bundled gcc doesn't know anything about system lookup paths, and I see this as akin to saying we should run gcc on unix with no default lookup paths (which would likely be disastrous). I suspect that there are other projects which rely on libs coming from a MinGW, and this seems similar to relying on randomly installed libraries on unix systems, so I'm not sure if it's necessarily a bad thing. Do we know why the error in is occurring? It looks like there's an incompatibility with the MinGW installation, but do we know precisely what it is? 
I was under the impression that linkers were intended to interoperate well...\nI don't really want gcc to be able to find arbitrary mingw libraries because I don't want Rust users on Windows to depend on mingw (imagine cargo packages that expect mingw libs but only a fraction of windows users actually have it installed). I've misunderstood the implications of this issue before, but my inclination is to do the revert and make the cargo build do the extra work to get at the libs it needs.\nNominating since this issue gets hit pretty often and is the last major usability problem on windows.\nNot quite: mingw gcc has a default library search path built-in, and it is expressed relative to the installation point. So the one we bundle is going to look for libraries only under (unless we supply more via , of course). This is mostly about library compatibility, not about the linker. GCC may emit references to library symbols that may not exist/be buggy in older versions of mingw. This particular one probably happened because the developer had installed mingw with sjlj exception handling, which defines UnwindSjLjResume instead of UnwindResume.\nAssigning P-high, 1.0.\nNot being able to access the full set of mingw libs means that people can't link to random GNU libs (which I'm in favor of). I'm unclear on whether it also prevents people from linking to various windows system libraries (I have this vague idea that mingw includes artifacts that make the linker able to link to e.g. ). Is there any concern there?\ngcc needs to recognize system library (), so picks up . We ship , but we don't ship or so , rust-lang/cargo, and occurs. So, what's the role of the bundle? I thought the bundle is for providing a \"minimal\" linking environment so that beginners can build hello-world without installing mingw-w64. In this scenario, if a user wants to do more than the bundle can provide (e.g. gdi32), the user must install mingw-w64. At least, this is the current status. Maybe we can bundle ? 
is 72M.\nPreferring the bundle means: if the user has an old mingw, it will not be used. This solves lots of beginners' issues. , if the user really wants to use their own linker, the user needs to do something. This complicates the build story. So my suggestion is 1) prefer the bundle and 2) add a flag to not use the bundle at all (rather than changing preference). Maybe or ?\nOn the other hand, there are a few libs that need to be matched to the version of gcc that Rust was built with. It's mostly stuff like and ; possibly also ,. So simply deferring to the external linker probably isn't a good idea, unless we ensure that our bundled libs are being found first. I kinda like the idea of shipping the minimal set of import libraries that is sufficient for linking Rust's standard library. We could add a few more common ones like , but everything is a bit much, IMHO. For reference, here's the list of import libraries that D ships with: , , , , , ,, ,, , , , , , , , . Btw, mingw can also link to dlls directly, so in many cases adding to the command line is enough.\nld can also link to dlls directly? Is it possible to #link directly in them in source? #[link(path = \"%systemroot%system32\")] or something like that?\nThat D ships a set of system import libraries is encouraging. Though if mingw can link to dlls directly it's not clear why this is such a problem - if we were to just have rustc add the system32 path by default would this problem with windows import libraries go away completely?", "positive_passages": [{"docid": "doc-en-rust-7e07e3b4132822e8a2c7fa4748fc7d92a65fd3a478956874acae98ac3fae5de3", "text": "p.push(find_libdir(self.sysroot)); p.push(rustlibdir()); p.push(self.triple); let mut p1 = p.clone(); p1.push(\"bin\"); let mut p2 = p.clone(); p2.push(\"gcc\"); p2.push(\"bin\"); vec![p1, p2] p.push(\"bin\"); vec![p] } }", "commid": "rust_pr_18797"}], "negative_passages": []}
{"query_id": "q-en-rust-02bc8dabc7d8997f6eccaac1b7159290b362b0fedce53008990455f3efd75625", "query": "The following causes an LLVM assertion failure: Removing the panic! prevents the assertion failure.\nI minimized it some more: The Option use rustc::ty; use rustc::ty::{self, Ty}; use rustc::ty::layout::{Align, LayoutOf}; use rustc::hir::{self, CodegenFnAttrFlags}; use rustc::hir::{self, CodegenFnAttrs, CodegenFnAttrFlags}; use std::ffi::{CStr, CString};", "commid": "rust_pr_52635"}], "negative_passages": []}
{"query_id": "q-en-rust-84928dfd6fe4708e73d371a7088e17c27c31483dfa3506634d725091306ed23e", "query": "For example: This can be a little more clearly seen with the IR generated for main: Note the lack of anywhere on .\nTriage: this is still true today, though needs to be updated to , and there's now a warning about not being a valid foreign type, which is irrelevant to this particular issue.\nclaims it repros still.\nUpdated test code: Still fails to compile with the same errors mentioned in the top post along with the irrelevant invalid foreign type warning mentioned by", "positive_passages": [{"docid": "doc-en-rust-a2b792b6c02802e532211ebd80563bb5f432526afa16c814eaf6a391c1753aad", "text": "let ty = instance.ty(cx.tcx); let sym = cx.tcx.symbol_name(instance).as_str(); debug!(\"get_static: sym={} instance={:?}\", sym, instance); let g = if let Some(id) = cx.tcx.hir.as_local_node_id(def_id) { let llty = cx.layout_of(ty).llvm_type(cx);", "commid": "rust_pr_52635"}], "negative_passages": []}
{"query_id": "q-en-rust-84928dfd6fe4708e73d371a7088e17c27c31483dfa3506634d725091306ed23e", "query": "For example: This can be a little more clearly seen with the IR generated for main: Note the lack of anywhere on .\nTriage: this is still true today, though needs to be updated to , and there's now a warning about not being a valid foreign type, which is irrelevant to this particular issue.\nclaims it repros still.\nUpdated test code: Still fails to compile with the same errors mentioned in the top post along with the irrelevant invalid foreign type warning mentioned by", "positive_passages": [{"docid": "doc-en-rust-318b3ac5f2ec07849df1c79698c379f480406bae59be166603870eea318c8274", "text": "hir_map::NodeForeignItem(&hir::ForeignItem { ref attrs, span, node: hir::ForeignItemKind::Static(..), .. }) => { let g = if let Some(linkage) = cx.tcx.codegen_fn_attrs(def_id).linkage { // If this is a static with a linkage specified, then we need to handle // it a little specially. The typesystem prevents things like &T and // extern \"C\" fn() from being non-null, so we can't just declare a // static and call it a day. Some linkages (like weak) will make it such // that the static actually has a null value. let llty2 = match ty.sty { ty::TyRawPtr(ref mt) => cx.layout_of(mt.ty).llvm_type(cx), _ => { cx.sess().span_fatal(span, \"must have type `*const T` or `*mut T`\"); } }; unsafe { // Declare a symbol `foo` with the desired linkage. let g1 = declare::declare_global(cx, &sym, llty2); llvm::LLVMRustSetLinkage(g1, base::linkage_to_llvm(linkage)); // Declare an internal global `extern_with_linkage_foo` which // is initialized with the address of `foo`. If `foo` is // discarded during linking (for example, if `foo` has weak // linkage and there are no definitions), then // `extern_with_linkage_foo` will instead be initialized to // zero. 
let mut real_name = \"_rust_extern_with_linkage_\".to_string(); real_name.push_str(&sym); let g2 = declare::define_global(cx, &real_name, llty).unwrap_or_else(||{ cx.sess().span_fatal(span, &format!(\"symbol `{}` is already defined\", &sym)) }); llvm::LLVMRustSetLinkage(g2, llvm::Linkage::InternalLinkage); llvm::LLVMSetInitializer(g2, g1); g2 } } else { // Generate an external declaration. declare::declare_global(cx, &sym, llty) }; (g, attrs) let fn_attrs = cx.tcx.codegen_fn_attrs(def_id); (check_and_apply_linkage(cx, &fn_attrs, ty, sym, Some(span)), attrs) } item => bug!(\"get_static: expected static, found {:?}\", item) }; debug!(\"get_static: sym={} attrs={:?}\", sym, attrs); for attr in attrs { if attr.check_name(\"thread_local\") { llvm::set_thread_local_mode(g, cx.tls_model);", "commid": "rust_pr_52635"}], "negative_passages": []}
{"query_id": "q-en-rust-84928dfd6fe4708e73d371a7088e17c27c31483dfa3506634d725091306ed23e", "query": "For example: This can be a little more clearly seen with the IR generated for main: Note the lack of anywhere on .\nTriage: this is still true today, though needs to be updated to , and there's now a warning about not being a valid foreign type, which is irrelevant to this particular issue.\nclaims it repros still.\nUpdated test code: Still fails to compile with the same errors mentioned in the top post along with the irrelevant invalid foreign type warning mentioned by", "positive_passages": [{"docid": "doc-en-rust-68bc26f3d58da61ad193376712dd0cf98554e5aed8baa00b65890d6fffcb44df", "text": "g } else { // FIXME(nagisa): perhaps the map of externs could be offloaded to llvm somehow? // FIXME(nagisa): investigate whether it can be changed into define_global let g = declare::declare_global(cx, &sym, cx.layout_of(ty).llvm_type(cx)); debug!(\"get_static: sym={} item_attr={:?}\", sym, cx.tcx.item_attrs(def_id)); let attrs = cx.tcx.codegen_fn_attrs(def_id); let g = check_and_apply_linkage(cx, &attrs, ty, sym, None); // Thread-local statics in some other crate need to *always* be linked // against in a thread-local fashion, so we need to be sure to apply the // thread-local attribute locally if it was present remotely. If we // don't do this then linker errors can be generated where the linker // complains that one object files has a thread local version of the // symbol and another one doesn't. for attr in cx.tcx.get_attrs(def_id).iter() { if attr.check_name(\"thread_local\") { llvm::set_thread_local_mode(g, cx.tls_model); } if attrs.flags.contains(CodegenFnAttrFlags::THREAD_LOCAL) { llvm::set_thread_local_mode(g, cx.tls_model); } if cx.use_dll_storage_attrs && !cx.tcx.is_foreign_item(def_id) { // This item is external but not foreign, i.e. it originates from an external Rust // crate. 
Since we don't know whether this crate will be linked dynamically or", "commid": "rust_pr_52635"}], "negative_passages": []}
{"query_id": "q-en-rust-84928dfd6fe4708e73d371a7088e17c27c31483dfa3506634d725091306ed23e", "query": "For example: This can be a little more clearly seen with the IR generated for main: Note the lack of anywhere on .\nTriage: this is still true today, though needs to be updated to , and there's now a warning about not being a valid foreign type, which is irrelevant to this particular issue.\nclaims it repros still.\nUpdated test code: Still fails to compile with the same errors mentioned in the top post along with the irrelevant invalid foreign type warning mentioned by", "positive_passages": [{"docid": "doc-en-rust-ffdea37f271aa115c8be581deff56497e651f233aaa307a965a4af1cf39ad956", "text": "g } fn check_and_apply_linkage<'tcx>( cx: &CodegenCx<'_, 'tcx>, attrs: &CodegenFnAttrs, ty: Ty<'tcx>, sym: LocalInternedString, span: Option ) -> ValueRef { let llty = cx.layout_of(ty).llvm_type(cx); if let Some(linkage) = attrs.linkage { debug!(\"get_static: sym={} linkage={:?}\", sym, linkage); // If this is a static with a linkage specified, then we need to handle // it a little specially. The typesystem prevents things like &T and // extern \"C\" fn() from being non-null, so we can't just declare a // static and call it a day. Some linkages (like weak) will make it such // that the static actually has a null value. let llty2 = match ty.sty { ty::TyRawPtr(ref mt) => cx.layout_of(mt.ty).llvm_type(cx), _ => { if span.is_some() { cx.sess().span_fatal(span.unwrap(), \"must have type `*const T` or `*mut T`\") } else { bug!(\"must have type `*const T` or `*mut T`\") } } }; unsafe { // Declare a symbol `foo` with the desired linkage. let g1 = declare::declare_global(cx, &sym, llty2); llvm::LLVMRustSetLinkage(g1, base::linkage_to_llvm(linkage)); // Declare an internal global `extern_with_linkage_foo` which // is initialized with the address of `foo`. 
If `foo` is // discarded during linking (for example, if `foo` has weak // linkage and there are no definitions), then // `extern_with_linkage_foo` will instead be initialized to // zero. let mut real_name = \"_rust_extern_with_linkage_\".to_string(); real_name.push_str(&sym); let g2 = declare::define_global(cx, &real_name, llty).unwrap_or_else(||{ if span.is_some() { cx.sess().span_fatal( span.unwrap(), &format!(\"symbol `{}` is already defined\", &sym) ) } else { bug!(\"symbol `{}` is already defined\", &sym) } }); llvm::LLVMRustSetLinkage(g2, llvm::Linkage::InternalLinkage); llvm::LLVMSetInitializer(g2, g1); g2 } } else { // Generate an external declaration. // FIXME(nagisa): investigate whether it can be changed into define_global declare::declare_global(cx, &sym, llty) } } pub fn codegen_static<'a, 'tcx>( cx: &CodegenCx<'a, 'tcx>, def_id: DefId,", "commid": "rust_pr_52635"}], "negative_passages": []}
{"query_id": "q-en-rust-84928dfd6fe4708e73d371a7088e17c27c31483dfa3506634d725091306ed23e", "query": "For example: This can be a little more clearly seen with the IR generated for main: Note the lack of anywhere on .\nTriage: this is still true today, though needs to be updated to , and there's now a warning about not being a valid foreign type, which is irrelevant to this particular issue.\nclaims it repros still.\nUpdated test code: Still fails to compile with the same errors mentioned in the top post along with the irrelevant invalid foreign type warning mentioned by", "positive_passages": [{"docid": "doc-en-rust-567c2b5d02a723fbedef3eaa51665a4b6afc0ac1083bb3915a6354d86ce4cf4c", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 fn blank<'a>(s: Option<&'a str>) -> &'a str { match s { Some(s) => s, None => \"\" } } fn shorter<'a>(s: Option<&'a str>) -> &'a str { match s { Some(s) => match s.find_str(\"nn\") {", "commid": "rust_pr_19234"}], "negative_passages": []}
{"query_id": "q-en-rust-b5fbe2780d1c918d212d3fb3594848d807c1288493fc3aa8bd051cfa2b2893e5", "query": "Here's what I think is wrong: keywords , and appear next to the name of the constant, when clearly they are redundant. the constants page through search brings the user to an empty page. value assigned to the constant appears along side its name. When reading the documentation of constants, the most important information is the name of the constant and its description. Having the declaration appear inline makes scanning for the name of harder. Better output would replace the declaration with the name of the constant followed by its type. Also it would move the declaration of the constant to the empty page mentioned in point 2 and have the identifier of the constant link to this page. All these points are relevant for statics as well. Example of ugly output: ! This page should show the definition and a description of the constant: !", "positive_passages": [{"docid": "doc-en-rust-fe44281601ef46cdad99ffe32b59b5f531ae4cea381950e28eacc3750478bb68", "text": "id = short, name = name)); } struct Initializer<'a>(&'a str, Item<'a>); impl<'a> fmt::Show for Initializer<'a> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let Initializer(s, item) = *self; if s.len() == 0 { return Ok(()); } try!(write!(f, \" match myitem.inner { = \")); if s.contains(\"n\") { match item.href() { Some(url) => { write!(f, \"[definition]\", url) } None => Ok(()), } } else { write!(f, \"{}\", s.as_slice()) } } } clean::StaticItem(ref s) | clean::ForeignStaticItem(ref s) => { try!(write!(w, \" clean::ViewItemItem(ref item) => { match item.inner { clean::ExternCrate(ref name, ref src, _) => {", "commid": "rust_pr_19234"}], "negative_passages": []}
{"query_id": "q-en-rust-b5fbe2780d1c918d212d3fb3594848d807c1288493fc3aa8bd051cfa2b2893e5", "query": "Here's what I think is wrong: keywords , and appear next to the name of the constant, when clearly they are redundant. the constants page through search brings the user to an empty page. value assigned to the constant appears along side its name. When reading the documentation of constants, the most important information is the name of the constant and its description. Having the declaration appear inline makes scanning for the name of harder. Better output would replace the declaration with the name of the constant followed by its type. Also it would move the declaration of the constant to the empty page mentioned in point 2 and have the identifier of the constant link to this page. All these points are relevant for statics as well. Example of ugly output: ! This page should show the definition and a description of the constant: !", "positive_passages": [{"docid": "doc-en-rust-a2df27e29da610839f330476dea2be54d786d608ea961e628767a93fccebb29e", "text": "write!(w, \"\") } struct Initializer<'a>(&'a str); impl<'a> fmt::Show for Initializer<'a> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let Initializer(s) = *self; if s.len() == 0 { return Ok(()); } try!(write!(f, \" \", ConciseStability(&myitem.stability), VisSpace(myitem.visibility), MutableSpace(s.mutability), *myitem.name.as_ref().unwrap(), s.type_, Initializer(s.expr.as_slice(), Item { cx: cx, item: myitem }), Markdown(blank(myitem.doc_value())))); } clean::ConstantItem(ref s) => { try!(write!(w, \" {} {}static {}{}: {}{}{} \", ConciseStability(&myitem.stability), VisSpace(myitem.visibility), *myitem.name.as_ref().unwrap(), s.type_, Initializer(s.expr.as_slice(), Item { cx: cx, item: myitem }), Markdown(blank(myitem.doc_value())))); } {} {}const {}: {}{}{} = \")); write!(f, \"{}\", s.as_slice()) } } fn item_constant(w: &mut fmt::Formatter, it: &clean::Item, c: &clean::Constant) -> fmt::Result { 
try!(write!(w, \"{vis}const {name}: {typ}{init}\", vis = VisSpace(it.visibility), name = it.name.as_ref().unwrap().as_slice(), typ = c.type_, init = Initializer(c.expr.as_slice()))); document(w, it) } fn item_static(w: &mut fmt::Formatter, it: &clean::Item, s: &clean::Static) -> fmt::Result { try!(write!(w, \"{vis}static {mutability} {name}: {typ}{init}\", vis = VisSpace(it.visibility), mutability = MutableSpace(s.mutability), name = it.name.as_ref().unwrap().as_slice(), typ = s.type_, init = Initializer(s.expr.as_slice()))); document(w, it) } fn item_function(w: &mut fmt::Formatter, it: &clean::Item, f: &clean::Function) -> fmt::Result { try!(write!(w, \"{vis}{fn_style}fn ", "commid": "rust_pr_19234"}], "negative_passages": []}
{"query_id": "q-en-rust-88465e69fce7874164bd433141c7fe472f9f27c2dade7cbf92125b5006492558", "query": "The documentation is misleading. The pointer is not allowed to be null.\nHow do you make an empty slice if the pointer is not allowed to be null?\nThe underlying representation is an implementation detail and shouldn't be documented beyond the ability to convert the raw parts obtained from a vector back into a vector.\nThe language and library documentation has a love affair with making far too many promises about the implementation. In the vector module, there are numerous errors when it comes to information about the vector's capacity too. It tends to guarantee that the capacity is exactly what was asked for rather than at least that much. It is allowed to set the capacity to a value provided by the allocator.\nDoes that mean you are required to special case the empty case and then use a different way to make an empty vector?\nNo, it means you can't use this method outside of the standard library beyond converting from an existing vector.\nIf what you're asking is how it does this internally, the answer is that empty vectors along with zero-size allocations in general are allowed to be entirely arbitrary pointers. They will never be dereferenced and the compiler / library code will never attempt to deallocate them.\nThe documentation is not supposed to cover these implementation details. 
It would make sense to cover it in comments (which it is) or internal design documentation.\nIn that case I suppose the docs should say: can only be constructed from components of an already existing vector.", "positive_passages": [{"docid": "doc-en-rust-4a95538432201e87ca988f2705cfd4f08e33471577528b2d1030d031abc4142c", "text": "} } /// Creates a `Vec /// Creates a `Vec /// This is highly unsafe: /// /// - if `ptr` is null, then `length` and `capacity` should be 0 /// - `ptr` must point to an allocation of size `capacity` /// - there must be `length` valid instances of type `T` at the /// beginning of that allocation /// - `ptr` must be allocated by the default `Vec` allocator /// This is highly unsafe, due to the number of invariants that aren't checked. /// /// # Example ///", "commid": "rust_pr_19306"}], "negative_passages": []}
{"query_id": "q-en-rust-5f5507094586d90ba2ca78024e2b029d385771dbc83fd987403c3f5f8c30699a", "query": "cc\nIt seems like the second part of this (type parameter shadowing) is backwards incompatible, but AFAICT, isn't implemented yet.\nI suggest we leave it and go straight for the lint (as we plan to do eventually with the lifetime version). Then no backwards compatibility issues. Also, since type shadowing is allowed in every other language it is going to really surprise people if it is forbidden in Rust. And, it is the shadowed lifetimes which were causing the motivating confusion.\njust for future visitors, type parameter shadowing (like the example in the RFC) does, in fact, produce a clear compiler error. The comments above and lack of linked \"type parameter shadowing\" PRs might otherwise suggest that type parameter shadowing was left as a lint rather than a compiler error.", "positive_passages": [{"docid": "doc-en-rust-89777ca1a307a97e6d350cb970476e70632cc4f0634805ba04e43f9edbaa787d", "text": "use syntax::ast_util::{local_def}; use syntax::attr; use syntax::codemap::Span; use syntax::parse::token; use syntax::visit; use syntax::visit::Visitor;", "commid": "rust_pr_20728"}], "negative_passages": []}
{"query_id": "q-en-rust-5f5507094586d90ba2ca78024e2b029d385771dbc83fd987403c3f5f8c30699a", "query": "cc\nIt seems like the second part of this (type parameter shadowing) is backwards incompatible, but AFAICT, isn't implemented yet.\nI suggest we leave it and go straight for the lint (as we plan to do eventually with the lifetime version). Then no backwards compatibility issues. Also, since type shadowing is allowed in every other language it is going to really surprise people if it is forbidden in Rust. And, it is the shadowed lifetimes which were causing the motivating confusion.\njust for future visitors, type parameter shadowing (like the example in the RFC) does, in fact, produce a clear compiler error. The comments above and lack of linked \"type parameter shadowing\" PRs might otherwise suggest that type parameter shadowing was left as a lint rather than a compiler error.", "positive_passages": [{"docid": "doc-en-rust-456eecb7eec26a55daa5bcfafe37bb7ca7ca7bb974d611e8353c924e21bbd65a", "text": "} } fn reject_shadowing_type_parameters<'tcx>(tcx: &ty::ctxt<'tcx>, span: Span, generics: &ty::Generics<'tcx>) { let impl_params = generics.types.get_slice(subst::TypeSpace).iter() .map(|tp| tp.name).collect:: fn error fn error(&self, reason: ErrorCode) -> Result { Err(SyntaxError(reason, self.line, self.col)) }", "commid": "rust_pr_20728"}], "negative_passages": []}
{"query_id": "q-en-rust-5f5507094586d90ba2ca78024e2b029d385771dbc83fd987403c3f5f8c30699a", "query": "cc\nIt seems like the second part of this (type parameter shadowing) is backwards incompatible, but AFAICT, isn't implemented yet.\nI suggest we leave it and go straight for the lint (as we plan to do eventually with the lifetime version). Then no backwards compatibility issues. Also, since type shadowing is allowed in every other language it is going to really surprise people if it is forbidden in Rust. And, it is the shadowed lifetimes which were causing the motivating confusion.\njust for future visitors, type parameter shadowing (like the example in the RFC) does, in fact, produce a clear compiler error. The comments above and lack of linked \"type parameter shadowing\" PRs might otherwise suggest that type parameter shadowing was left as a lint rather than a compiler error.", "positive_passages": [{"docid": "doc-en-rust-e1f57f4a672b64e03b4d7a90973274aec453430b32451bbd1709d96c224a3617", "text": " // Copyright 2013 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 // FIXME(#20378) assoc type normalization here? // Erase any late-bound regions bound in the impl // which appear in the bounds. let impl_bounds = self.erase_late_bound_regions(&ty::Binder(impl_bounds)); let traits::Normalized { value: impl_bounds, obligations: norm_obligations } = traits::normalize(selcx, cause.clone(), &impl_bounds); // Convert the bounds into obligations. let obligations = traits::predicates_for_generics( self.tcx(), traits::ObligationCause::misc(self.span, self.fcx.body_id), &impl_bounds); traits::predicates_for_generics(self.tcx(), cause.clone(), &impl_bounds); debug!(\"impl_obligations={}\", obligations.repr(self.tcx())); // Evaluate those obligations to see if they might possibly hold. 
let mut selcx = traits::SelectionContext::new(self.infcx(), self.fcx); obligations.all(|o| selcx.evaluate_obligation(o)) obligations.all(|o| selcx.evaluate_obligation(o)) && norm_obligations.iter().all(|o| selcx.evaluate_obligation(o)) } ObjectCandidate(..) |", "commid": "rust_pr_20608"}], "negative_passages": []}
{"query_id": "q-en-rust-271e5254a2e8191ad58ca6872f59048573a6bb5d7abe050059f6847f5e72f7d8", "query": "Right now we don't normalize impl bounds during method probing, which means that the winnow stage may not work great when those bounds involve associated types. I think with the new setup we should be able to do this but didn't have time to test. Search for the FIXME in", "positive_passages": [{"docid": "doc-en-rust-528025b5d28509a0972e8631772b2a319994794a7aa43e9734b4cdb77d7990cc", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 { s: S, } impl Map where S: HashState, ::Wut: Hasher //! in `T` - a vtable or a length (or `()` if `T: Sized`). //! in `T` - the vtable for a trait definition (e.g. `fmt::Display` or //! `Iterator`, not `Iterator Vtable, Vtable(ast::DefId), Length, /// The unsize info of this projection OfProjection(&'tcx ty::ProjectionTy<'tcx>),", "commid": "rust_pr_26394"}], "negative_passages": []}
{"query_id": "q-en-rust-d4a593b434e97fdf75553db892a4a97b6111c11da0bceb895600abdb30f17120", "query": "Much of the implementation work for rust-lang/rfcs has been done, but there remain some corner cases to fix: [ ] Casts between raw pointers to different traits are accepted but should not be (e.g. ) [ ] Double-check other cases, ensure adequate tests exist\ntriage: P-high -- I'm giving this high priority since accepting casts that we ought not to is a backwards compatibility risk, though it is mitigated by the fact that afaik the illegal casts yield an ICE (but I haven't done exhaustive testing)\nThe newly-illegal casts shouldn't be ICE-s (I mean, on nightly). Casts on stable are just randomly ICE-ey.", "positive_passages": [{"docid": "doc-en-rust-43d6176dfd60f671c5d92f18a135aa25484240480aa8f2f15fe3586a19084fe4", "text": "-> Option ty::TyTrait(_) => Some(UnsizeKind::Vtable), ty::TyTrait(ref tty) => Some(UnsizeKind::Vtable(tty.principal_def_id())), ty::TyStruct(did, substs) => { match ty::struct_fields(fcx.tcx(), did, substs).pop() { None => None,", "commid": "rust_pr_26394"}], "negative_passages": []}
{"query_id": "q-en-rust-d4a593b434e97fdf75553db892a4a97b6111c11da0bceb895600abdb30f17120", "query": "Much of the implementation work for rust-lang/rfcs has been done, but there remain some corner cases to fix: [ ] Casts between raw pointers to different traits are accepted but should not be (e.g. ) [ ] Double-check other cases, ensure adequate tests exist\ntriage: P-high -- I'm giving this high priority since accepting casts that we ought not to is a backwards compatibility risk, though it is mitigated by the fact that afaik the illegal casts yield an ICE (but I haven't done exhaustive testing)\nThe newly-illegal casts shouldn't be ICE-s (I mean, on nightly). Casts on stable are just randomly ICE-ey.", "positive_passages": [{"docid": "doc-en-rust-3c9f20479e4729f2c52d0e6acfff91eba74fc7076bdd18c15458648cf7f46b46", "text": "trait Foo { fn foo(&self) {} } impl unsafe fn fool<'a>(t: *const (Foo unsafe fn round_trip_and_call<'a>(t: *const (Foo (&*r_1).foo(0)*(&*(bar as *const Foo (&*r_1).foo(0) } #[repr(C)]", "commid": "rust_pr_26394"}], "negative_passages": []}
{"query_id": "q-en-rust-d4a593b434e97fdf75553db892a4a97b6111c11da0bceb895600abdb30f17120", "query": "Much of the implementation work for rust-lang/rfcs has been done, but there remain some corner cases to fix: [ ] Casts between raw pointers to different traits are accepted but should not be (e.g. ) [ ] Double-check other cases, ensure adequate tests exist\ntriage: P-high -- I'm giving this high priority since accepting casts that we ought not to is a backwards compatibility risk, though it is mitigated by the fact that afaik the illegal casts yield an ICE (but I haven't done exhaustive testing)\nThe newly-illegal casts shouldn't be ICE-s (I mean, on nightly). Casts on stable are just randomly ICE-ey.", "positive_passages": [{"docid": "doc-en-rust-1dea7b9f653c61585e6991495888bcfe2316a36e899e163db85e556d81eeb09d", "text": "fn main() { let x = 4u32; let y : &Foo let fl = unsafe { fool(y as *const Foo let fl = unsafe { round_trip_and_call(y as *const Foo // ptr-ptr-cast (both vk=Length) // ptr-ptr-cast (Length vtables) let mut l : [u8; 2] = [0,1]; let w: *mut [u16; 2] = &mut l as *mut [u8; 2] as *mut _; let w: *mut [u16] = unsafe {&mut *w};", "commid": "rust_pr_26394"}], "negative_passages": []}
{"query_id": "q-en-rust-d4a593b434e97fdf75553db892a4a97b6111c11da0bceb895600abdb30f17120", "query": "Much of the implementation work for rust-lang/rfcs has been done, but there remain some corner cases to fix: [ ] Casts between raw pointers to different traits are accepted but should not be (e.g. ) [ ] Double-check other cases, ensure adequate tests exist\ntriage: P-high -- I'm giving this high priority since accepting casts that we ought not to is a backwards compatibility risk, though it is mitigated by the fact that afaik the illegal casts yield an ICE (but I haven't done exhaustive testing)\nThe newly-illegal casts shouldn't be ICE-s (I mean, on nightly). Casts on stable are just randomly ICE-ey.", "positive_passages": [{"docid": "doc-en-rust-4504e8170ecd5fcc46caed139f9442f6496ed748643695d70f698bbbe173cbbc", "text": "let l_via_str = unsafe{&*(s as *const [u8])}; assert_eq!(&l, l_via_str); // ptr-ptr-cast (Length vtables, check length is preserved) let l: [[u8; 3]; 2] = [[3, 2, 6], [4, 5, 1]]; let p: *const [[u8; 3]] = &l; let p: &[[u8; 2]] = unsafe {&*(p as *const [[u8; 2]])}; assert_eq!(p, [[3, 2], [6, 4]]); // enum-cast assert_eq!(Simple::A as u8, 0); assert_eq!(Simple::B as u8, 1);", "commid": "rust_pr_26394"}], "negative_passages": []}
{"query_id": "q-en-rust-03196d1b720ff7ae1cf9aa1cc66dd1f5ed7c199530aca635881095af0585155a", "query": "Right now, a fn implementing an iface method must match the purity of the iface method exactly. This is too strict. We should allow a pure fn to implement an impure fn, and both pure/impure to implement an unsafe fn. UPDATE: Updated title to reflect the real problem here. There is a FIXME in the code in the relevant area. If one declares a trait like: then in the impl, the method should take one type parameter with an bound. We don't really check this correctly though. As far as I can tell, we just check that has one type parameter with one bound, but not precisely what kind of bound it is.\nSimilarly, in , we require that the bounds on the method parameters be precisely equivalent, which is stronger than necessary. It is sufficient (I believe) that the bounds on the trait be stronger than the bounds on the implementation.\nIs this still an issue? It seems to be fixed now.\nStill an issue with respect to type parameter bounds.\nI don't believe this is backwards incompatible, renominating.\naccepted for feature-complete milestone\nThe following appears to work, but I assume that this is what was talking about:\ntriage bump. nothing to add.\nAccepted for P-backcompat-lang\nAdded UPDATE to main bug description above.\nI had a look at this issue and tried to create a test case. does test case covers the issue you mentioned? This test case fails because it seems like the parameter types are checked in\nwhat error does it fail with? In the you need to use a different type bound for in order to test this issue. To be more precise, the bounds of the implementation should be implied by the bounds of the trait. For example, this should fail:\nthanks for the hint, I guess your updated example should indeed test this issue. 
I'll start working on a patch.", "positive_passages": [{"docid": "doc-en-rust-caec82172341739f7b261010d5823d336df96af99574a7495613d264c6f9d82c", "text": "let impl_m = &cm.mty; if impl_m.fty.meta.purity != trait_m.fty.meta.purity { tcx.sess.span_err( cm.span, fmt!(\"method `%s`'s purity does not match the trait method's purity\", tcx.sess.str_of(impl_m.ident))); } // is this check right? // FIXME(#2687)---this check is too strict. For example, a trait // method with self type `&self` or `&mut self` should be // implementable by an `&const self` method (the impl assumes less // than the trait provides). if impl_m.self_ty != trait_m.self_ty { tcx.sess.span_err( cm.span,", "commid": "rust_pr_3873"}], "negative_passages": []}
{"query_id": "q-en-rust-03196d1b720ff7ae1cf9aa1cc66dd1f5ed7c199530aca635881095af0585155a", "query": "Right now, a fn implementing an iface method must match the purity of the iface method exactly. This is too strict. We should allow a pure fn to implement an impure fn, and both pure/impure to implement an unsafe fn. UPDATE: Updated title to reflect the real problem here. There is a FIXME in the code in the relevant area. If one declares a trait like: then in the impl, the method should take one type parameter with an bound. We don't really check this correctly though. As far as I can tell, we just check that has one type parameter with one bound, but not precisely what kind of bound it is.\nSimilarly, in , we require that the bounds on the method parameters be precisely equivalent, which is stronger than necessary. It is sufficient (I believe) that the bounds on the trait be stronger than the bounds on the implementation.\nIs this still an issue? It seems to be fixed now.\nStill an issue with respect to type parameter bounds.\nI don't believe this is backwards incompatible, renominating.\naccepted for feature-complete milestone\nThe following appears to work, but I assume that this is what was talking about:\ntriage bump. nothing to add.\nAccepted for P-backcompat-lang\nAdded UPDATE to main bug description above.\nI had a look at this issue and tried to create a test case. does test case covers the issue you mentioned? This test case fails because it seems like the parameter types are checked in\nwhat error does it fail with? In the you need to use a different type bound for in order to test this issue. To be more precise, the bounds of the implementation should be implied by the bounds of the trait. For example, this should fail:\nthanks for the hint, I guess your updated example should indeed test this issue. 
I'll start working on a patch.", "positive_passages": [{"docid": "doc-en-rust-b902e8c01413639a602dfadb7f12f9af28e0c2733f74dc8724c2a7c761329465", "text": "return; } // FIXME(#2687)---we should be checking that the bounds of the // trait imply the bounds of the subtype, but it appears // we are...not checking this. for trait_m.tps.eachi() |i, trait_param_bounds| { // For each of the corresponding impl ty param's bounds... let impl_param_bounds = impl_m.tps[i];", "commid": "rust_pr_3873"}], "negative_passages": []}
{"query_id": "q-en-rust-03196d1b720ff7ae1cf9aa1cc66dd1f5ed7c199530aca635881095af0585155a", "query": "Right now, a fn implementing an iface method must match the purity of the iface method exactly. This is too strict. We should allow a pure fn to implement an impure fn, and both pure/impure to implement an unsafe fn. UPDATE: Updated title to reflect the real problem here. There is a FIXME in the code in the relevant area. If one declares a trait like: then in the impl, the method should take one type parameter with an bound. We don't really check this correctly though. As far as I can tell, we just check that has one type parameter with one bound, but not precisely what kind of bound it is.\nSimilarly, in , we require that the bounds on the method parameters be precisely equivalent, which is stronger than necessary. It is sufficient (I believe) that the bounds on the trait be stronger than the bounds on the implementation.\nIs this still an issue? It seems to be fixed now.\nStill an issue with respect to type parameter bounds.\nI don't believe this is backwards incompatible, renominating.\naccepted for feature-complete milestone\nThe following appears to work, but I assume that this is what was talking about:\ntriage bump. nothing to add.\nAccepted for P-backcompat-lang\nAdded UPDATE to main bug description above.\nI had a look at this issue and tried to create a test case. does test case covers the issue you mentioned? This test case fails because it seems like the parameter types are checked in\nwhat error does it fail with? In the you need to use a different type bound for in order to test this issue. To be more precise, the bounds of the implementation should be implied by the bounds of the trait. For example, this should fail:\nthanks for the hint, I guess your updated example should indeed test this issue. 
I'll start working on a patch.", "positive_passages": [{"docid": "doc-en-rust-0570e7ca245bccd66eab5eaa5b761bdcb1e3fc04b60fee1b55c8203304ef0740", "text": "debug!(\"trait_fty (pre-subst): %s\", ty_to_str(tcx, trait_fty)); ty::subst(tcx, &substs, trait_fty) }; debug!(\"trait_fty: %s\", ty_to_str(tcx, trait_fty)); require_same_types( tcx, None, false, cm.span, impl_fty, trait_fty, || fmt!(\"method `%s` has an incompatible type\", tcx.sess.str_of(trait_m.ident))); let infcx = infer::new_infer_ctxt(tcx); match infer::mk_subty(infcx, false, cm.span, impl_fty, trait_fty) { result::Ok(()) => {} result::Err(ref terr) => { tcx.sess.span_err( cm.span, fmt!(\"method `%s` has an incompatible type: %s\", tcx.sess.str_of(trait_m.ident), ty::type_err_to_str(tcx, terr))); ty::note_and_explain_type_err(tcx, terr); } } return; // Replaces bound references to the self region with `with_r`.", "commid": "rust_pr_3873"}], "negative_passages": []}
{"query_id": "q-en-rust-03196d1b720ff7ae1cf9aa1cc66dd1f5ed7c199530aca635881095af0585155a", "query": "Right now, a fn implementing an iface method must match the purity of the iface method exactly. This is too strict. We should allow a pure fn to implement an impure fn, and both pure/impure to implement an unsafe fn. UPDATE: Updated title to reflect the real problem here. There is a FIXME in the code in the relevant area. If one declares a trait like: then in the impl, the method should take one type parameter with an bound. We don't really check this correctly though. As far as I can tell, we just check that has one type parameter with one bound, but not precisely what kind of bound it is.\nSimilarly, in , we require that the bounds on the method parameters be precisely equivalent, which is stronger than necessary. It is sufficient (I believe) that the bounds on the trait be stronger than the bounds on the implementation.\nIs this still an issue? It seems to be fixed now.\nStill an issue with respect to type parameter bounds.\nI don't believe this is backwards incompatible, renominating.\naccepted for feature-complete milestone\nThe following appears to work, but I assume that this is what was talking about:\ntriage bump. nothing to add.\nAccepted for P-backcompat-lang\nAdded UPDATE to main bug description above.\nI had a look at this issue and tried to create a test case. does test case covers the issue you mentioned? This test case fails because it seems like the parameter types are checked in\nwhat error does it fail with? In the you need to use a different type bound for in order to test this issue. To be more precise, the bounds of the implementation should be implied by the bounds of the trait. For example, this should fail:\nthanks for the hint, I guess your updated example should indeed test this issue. 
I'll start working on a patch.", "positive_passages": [{"docid": "doc-en-rust-bf3dd5226517d75e056d0b6175d50ee7feea04bdd81f483f891899405c40de43", "text": " trait Mumbo { pure fn jumbo(&self, x: @uint) -> uint; fn jambo(&self, x: @const uint) -> uint; fn jbmbo(&self) -> @uint; } impl uint: Mumbo { // Cannot have a larger effect than the trait: fn jumbo(&self, x: @uint) { *self + *x; } //~^ ERROR expected pure fn but found impure fn // Cannot accept a narrower range of parameters: fn jambo(&self, x: @uint) { *self + *x; } //~^ ERROR values differ in mutability // Cannot return a wider range of values: fn jbmbo(&self) -> @const uint { @const 0 } //~^ ERROR values differ in mutability } fn main() {} ", "commid": "rust_pr_3873"}], "negative_passages": []}
{"query_id": "q-en-rust-03196d1b720ff7ae1cf9aa1cc66dd1f5ed7c199530aca635881095af0585155a", "query": "Right now, a fn implementing an iface method must match the purity of the iface method exactly. This is too strict. We should allow a pure fn to implement an impure fn, and both pure/impure to implement an unsafe fn. UPDATE: Updated title to reflect the real problem here. There is a FIXME in the code in the relevant area. If one declares a trait like: then in the impl, the method should take one type parameter with an bound. We don't really check this correctly though. As far as I can tell, we just check that has one type parameter with one bound, but not precisely what kind of bound it is.\nSimilarly, in , we require that the bounds on the method parameters be precisely equivalent, which is stronger than necessary. It is sufficient (I believe) that the bounds on the trait be stronger than the bounds on the implementation.\nIs this still an issue? It seems to be fixed now.\nStill an issue with respect to type parameter bounds.\nI don't believe this is backwards incompatible, renominating.\naccepted for feature-complete milestone\nThe following appears to work, but I assume that this is what was talking about:\ntriage bump. nothing to add.\nAccepted for P-backcompat-lang\nAdded UPDATE to main bug description above.\nI had a look at this issue and tried to create a test case. does test case covers the issue you mentioned? This test case fails because it seems like the parameter types are checked in\nwhat error does it fail with? In the you need to use a different type bound for in order to test this issue. To be more precise, the bounds of the implementation should be implied by the bounds of the trait. For example, this should fail:\nthanks for the hint, I guess your updated example should indeed test this issue. 
I'll start working on a patch.", "positive_passages": [{"docid": "doc-en-rust-500537c687a9e841c09ee5facf3fe1da66a6c3bd64b6b527245f1c0d52622760", "text": " trait Mumbo { fn jumbo(&self, x: @uint) -> uint; } impl uint: Mumbo { // Note: this method def is ok, it is more accepting and // less effecting than the trait method: pure fn jumbo(&self, x: @const uint) -> uint { *self + *x } } fn main() { let a = 3u; let b = a.jumbo(@mut 6); let x = @a as @Mumbo; let y = x.jumbo(@mut 6); //~ ERROR values differ in mutability let z = x.jumbo(@6); } ", "commid": "rust_pr_3873"}], "negative_passages": []}
{"query_id": "q-en-rust-dc29692e329c492975de930903284b0a8df2e4072e6472451b55244c5ec17708", "query": "Some : The Lifetimes chapter currently has: which does say \"declares our lifetimes\", so maybe further explanation isn't appropriate here, but it looks like the Functions chapter is sticking to the bare minimum. I'm not sure where this would fit, but I think the could use another sentence or two of explanation-- \"This is where you declare lifetime and generic type parameters that this function is going to use\" and link to the appropriate chapters for that perhaps?", "positive_passages": [{"docid": "doc-en-rust-f2d5d42807374dcc52259d67ec5852c9c214b55e039e3dd0a5c827b150c928d2", "text": "fn bar<'a>(...) ``` This part declares our lifetimes. This says that `bar` has one lifetime, `'a`. If we had two reference parameters, it would look like this: We previously talked a little about [function syntax][functions], but we didn\u2019t discuss the `<>`s after a function\u2019s name. A function can have \u2018generic parameters\u2019 between the `<>`s, of which lifetimes are one kind. We\u2019ll discuss other kinds of generics [later in the book][generics], but for now, let\u2019s just focus on the lifteimes aspect. [functions]: functions.html [generics]: generics.html We use `<>` to declare our lifetimes. This says that `bar` has one lifetime, `'a`. If we had two reference parameters, it would look like this: ```rust,ignore fn bar<'a, 'b>(...)", "commid": "rust_pr_27538"}], "negative_passages": []}
{"query_id": "q-en-rust-cb3c8dda2a453e2d7f4cc7937e7c8f83933dee49c111d7b8d5c3c15b4f3fa421", "query": "Tracking issue for the feature, which is and the associated enum.\nYes, please. Why? Makes little sense.\nI disagree that it makes little sense to expose getters, it's possible to envision a situation where one formatting is composed of many others and you perhaps want some to be formatted to various widths or perhaps enable various flags (e.g. the \"alternate\" mode). It's certainly niche but it's somewhat necessary for completeness.\n, , and all seem straightforward and can be stabilized as-is. I would also very much like to add and stabilize boolean accessors for the various parts of : , , , and (we could call the last one but that seems like a genuinely excessive name). It's pretty unfortunate we stabilized the accessor in the first place IMO, but oh well. I'm a little less sure about , though. What is returned if no fill character was specified? Why doesn't it return an like the others? Is sufficient? Should it be a full grapheme instead?\nI'd be fine just deprecating the method (it's basically useless without constants anyway) and adding boolean accessors, I agree that it better matches design in today's Rust. I'd also be fine making return an and just specifying that it's always one unicode character (it's how the format string is parsed). Perhaps in theory a grapheme could be specified but that seems like something for an even fancier formatting system!\nThis issue is now entering its cycle-long FCP for stabilization in 1.5 The accessors being in will also be considered for stabilization.\nAh, and may be good to get your opinion on the grapheme-vs- situation here. Currently whenever something is formatted you have the ability to specify a \"fill\" character which for when padding is applied (normally this is just an ascii space). As points out this fill is currently just a , but it could in theory be a grapheme in terms of \"one thing on the screen\". 
Do you have an opinion either way in that regard? Curious to hear thoughts!\nUh. It\u2019s not just graphemes. The whole // thing in as currently written is based on some assumptions: \u2019re printing to something that (like most terminal emulators) align text on a grid . Unfortunately, as often with Unicode, it\u2019s complicated. Not only grapheme clusters can have more than one code point and still only use one slot (with combining code points), but most characters of some Asian languages and most emoji are \u201cfull width\u201d: they\u2019re twice the usual width in most monospace fonts. Control characters be displayed. Or they might be interpreted by the terminal to move the cursor around. has some more background. Here is an extract, about emoji flags: Our best bet for \u201dwhat is the width of this string?\u201d is probably That leaves dealing with if it\u2019s double width or a control character. If we want to do it, could be an arbitrary string rather than a single ? Or should be rejected?\nHm, those are very good points! I would be mostly tempted to just pare down everything and say it largely deals with ascii only (and maybe simple unicode?). I don't think it'd be too beneficial to start getting full-blown unicode-width support in libcore just to support a use case like this. Having some verification in the compiler, however, to make sure you're not doing crazy things seems reasonable? In theory, yes. I'd also be fine with this!\nThe libs team discussed this during triage today and the decision was to stabilize.\nTriage: looks like this was stabilized, closing!\nis still unstable, linking to this issue:\nI am a bit confused about the reason is still unstable. Is it because is only available from ? Is there anything I can help with to move this along?\nNominating for stabilization. I suspect that needs to be re-exported in std, but that seems like something trivial to solve while stabilizing. 
The only concern that I see left is that there appear to be 3 separate, but equivalent, definition of the enum in the various libraries. These should probably be de-duplicated, but again, since only one appears to be public, we can probably do this trivially as well during stabilization.\nI think we should remove and return an instead, but it seems reasonable to stabilize. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nShouldn't this get merged then?\nYes, with FCP to stabilize completed, the next step is a stabilization PR.", "positive_passages": [{"docid": "doc-en-rust-408b716fb47786f807f48dc315e134b4ac30c3ca7a7a01b28aaaa8c6d3ec3b34", "text": "let mut sign = None; if !is_positive { sign = Some('-'); width += 1; } else if self.flags & (1 << (FlagV1::SignPlus as u32)) != 0 { } else if self.sign_plus() { sign = Some('+'); width += 1; } let mut prefixed = false; if self.flags & (1 << (FlagV1::Alternate as u32)) != 0 { if self.alternate() { prefixed = true; width += prefix.char_len(); }", "commid": "rust_pr_28615"}], "negative_passages": []}
{"query_id": "q-en-rust-cb3c8dda2a453e2d7f4cc7937e7c8f83933dee49c111d7b8d5c3c15b4f3fa421", "query": "Tracking issue for the feature, which is and the associated enum.\nYes, please. Why? Makes little sense.\nI disagree that it makes little sense to expose getters, it's possible to envision a situation where one formatting is composed of many others and you perhaps want some to be formatted to various widths or perhaps enable various flags (e.g. the \"alternate\" mode). It's certainly niche but it's somewhat necessary for completeness.\n, , and all seem straightforward and can be stabilized as-is. I would also very much like to add and stabilize boolean accessors for the various parts of : , , , and (we could call the last one but that seems like a genuinely excessive name). It's pretty unfortunate we stabilized the accessor in the first place IMO, but oh well. I'm a little less sure about , though. What is returned if no fill character was specified? Why doesn't it return an like the others? Is sufficient? Should it be a full grapheme instead?\nI'd be fine just deprecating the method (it's basically useless without constants anyway) and adding boolean accessors, I agree that it better matches design in today's Rust. I'd also be fine making return an and just specifying that it's always one unicode character (it's how the format string is parsed). Perhaps in theory a grapheme could be specified but that seems like something for an even fancier formatting system!\nThis issue is now entering its cycle-long FCP for stabilization in 1.5 The accessors being in will also be considered for stabilization.\nAh, and may be good to get your opinion on the grapheme-vs- situation here. Currently whenever something is formatted you have the ability to specify a \"fill\" character which for when padding is applied (normally this is just an ascii space). As points out this fill is currently just a , but it could in theory be a grapheme in terms of \"one thing on the screen\". 
Do you have an opinion either way in that regard? Curious to hear thoughts!\nUh. It\u2019s not just graphemes. The whole // thing in as currently written is based on some assumptions: \u2019re printing to something that (like most terminal emulators) align text on a grid . Unfortunately, as often with Unicode, it\u2019s complicated. Not only grapheme clusters can have more than one code point and still only use one slot (with combining code points), but most characters of some Asian languages and most emoji are \u201cfull width\u201d: they\u2019re twice the usual width in most monospace fonts. Control characters be displayed. Or they might be interpreted by the terminal to move the cursor around. has some more background. Here is an extract, about emoji flags: Our best bet for \u201dwhat is the width of this string?\u201d is probably That leaves dealing with if it\u2019s double width or a control character. If we want to do it, could be an arbitrary string rather than a single ? Or should be rejected?\nHm, those are very good points! I would be mostly tempted to just pare down everything and say it largely deals with ascii only (and maybe simple unicode?). I don't think it'd be too beneficial to start getting full-blown unicode-width support in libcore just to support a use case like this. Having some verification in the compiler, however, to make sure you're not doing crazy things seems reasonable? In theory, yes. I'd also be fine with this!\nThe libs team discussed this during triage today and the decision was to stabilize.\nTriage: looks like this was stabilized, closing!\nis still unstable, linking to this issue:\nI am a bit confused about the reason is still unstable. Is it because is only available from ? Is there anything I can help with to move this along?\nNominating for stabilization. I suspect that needs to be re-exported in std, but that seems like something trivial to solve while stabilizing. 
The only concern that I see left is that there appear to be 3 separate, but equivalent, definition of the enum in the various libraries. These should probably be de-duplicated, but again, since only one appears to be public, we can probably do this trivially as well during stabilization.\nI think we should remove and return an instead, but it seems reasonable to stabilize. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nShouldn't this get merged then?\nYes, with FCP to stabilize completed, the next step is a stabilization PR.", "positive_passages": [{"docid": "doc-en-rust-c9c1cf0acb0342c33397d0bc973020c4513ca10cc1f4f7f014a27ada71ac7c45", "text": "} // The sign and prefix goes before the padding if the fill character // is zero Some(min) if self.flags & (1 << (FlagV1::SignAwareZeroPad as u32)) != 0 => { Some(min) if self.sign_aware_zero_pad() => { self.fill = '0'; try!(write_prefix(self)); self.with_padding(min - width, Alignment::Right, |f| {", "commid": "rust_pr_28615"}], "negative_passages": []}
{"query_id": "q-en-rust-cb3c8dda2a453e2d7f4cc7937e7c8f83933dee49c111d7b8d5c3c15b4f3fa421", "query": "Tracking issue for the feature, which is and the associated enum.\nYes, please. Why? Makes little sense.\nI disagree that it makes little sense to expose getters, it's possible to envision a situation where one formatting is composed of many others and you perhaps want some to be formatted to various widths or perhaps enable various flags (e.g. the \"alternate\" mode). It's certainly niche but it's somewhat necessary for completeness.\n, , and all seem straightforward and can be stabilized as-is. I would also very much like to add and stabilize boolean accessors for the various parts of : , , , and (we could call the last one but that seems like a genuinely excessive name). It's pretty unfortunate we stabilized the accessor in the first place IMO, but oh well. I'm a little less sure about , though. What is returned if no fill character was specified? Why doesn't it return an like the others? Is sufficient? Should it be a full grapheme instead?\nI'd be fine just deprecating the method (it's basically useless without constants anyway) and adding boolean accessors, I agree that it better matches design in today's Rust. I'd also be fine making return an and just specifying that it's always one unicode character (it's how the format string is parsed). Perhaps in theory a grapheme could be specified but that seems like something for an even fancier formatting system!\nThis issue is now entering its cycle-long FCP for stabilization in 1.5 The accessors being in will also be considered for stabilization.\nAh, and may be good to get your opinion on the grapheme-vs- situation here. Currently whenever something is formatted you have the ability to specify a \"fill\" character which for when padding is applied (normally this is just an ascii space). As points out this fill is currently just a , but it could in theory be a grapheme in terms of \"one thing on the screen\". 
Do you have an opinion either way in that regard? Curious to hear thoughts!\nUh. It\u2019s not just graphemes. The whole // thing in as currently written is based on some assumptions: \u2019re printing to something that (like most terminal emulators) align text on a grid . Unfortunately, as often with Unicode, it\u2019s complicated. Not only grapheme clusters can have more than one code point and still only use one slot (with combining code points), but most characters of some Asian languages and most emoji are \u201cfull width\u201d: they\u2019re twice the usual width in most monospace fonts. Control characters be displayed. Or they might be interpreted by the terminal to move the cursor around. has some more background. Here is an extract, about emoji flags: Our best bet for \u201dwhat is the width of this string?\u201d is probably That leaves dealing with if it\u2019s double width or a control character. If we want to do it, could be an arbitrary string rather than a single ? Or should be rejected?\nHm, those are very good points! I would be mostly tempted to just pare down everything and say it largely deals with ascii only (and maybe simple unicode?). I don't think it'd be too beneficial to start getting full-blown unicode-width support in libcore just to support a use case like this. Having some verification in the compiler, however, to make sure you're not doing crazy things seems reasonable? In theory, yes. I'd also be fine with this!\nThe libs team discussed this during triage today and the decision was to stabilize.\nTriage: looks like this was stabilized, closing!\nis still unstable, linking to this issue:\nI am a bit confused about the reason is still unstable. Is it because is only available from ? Is there anything I can help with to move this along?\nNominating for stabilization. I suspect that needs to be re-exported in std, but that seems like something trivial to solve while stabilizing. 
The only concern that I see left is that there appear to be 3 separate, but equivalent, definition of the enum in the various libraries. These should probably be de-duplicated, but again, since only one appears to be public, we can probably do this trivially as well during stabilization.\nI think we should remove and return an instead, but it seems reasonable to stabilize. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nShouldn't this get merged then?\nYes, with FCP to stabilize completed, the next step is a stabilization PR.", "positive_passages": [{"docid": "doc-en-rust-d7734815f6ce1d7b906f0ab305877f7beb3d3478f6321805929325128f4a4c24", "text": "let mut formatted = formatted.clone(); let mut align = self.align; let old_fill = self.fill; if self.flags & (1 << (FlagV1::SignAwareZeroPad as u32)) != 0 { if self.sign_aware_zero_pad() { // a sign always goes first let sign = unsafe { str::from_utf8_unchecked(formatted.sign) }; try!(self.buf.write_str(sign));", "commid": "rust_pr_28615"}], "negative_passages": []}
{"query_id": "q-en-rust-cb3c8dda2a453e2d7f4cc7937e7c8f83933dee49c111d7b8d5c3c15b4f3fa421", "query": "Tracking issue for the feature, which is and the associated enum.\nYes, please. Why? Makes little sense.\nI disagree that it makes little sense to expose getters, it's possible to envision a situation where one formatting is composed of many others and you perhaps want some to be formatted to various widths or perhaps enable various flags (e.g. the \"alternate\" mode). It's certainly niche but it's somewhat necessary for completeness.\n, , and all seem straightforward and can be stabilized as-is. I would also very much like to add and stabilize boolean accessors for the various parts of : , , , and (we could call the last one but that seems like a genuinely excessive name). It's pretty unfortunate we stabilized the accessor in the first place IMO, but oh well. I'm a little less sure about , though. What is returned if no fill character was specified? Why doesn't it return an like the others? Is sufficient? Should it be a full grapheme instead?\nI'd be fine just deprecating the method (it's basically useless without constants anyway) and adding boolean accessors, I agree that it better matches design in today's Rust. I'd also be fine making return an and just specifying that it's always one unicode character (it's how the format string is parsed). Perhaps in theory a grapheme could be specified but that seems like something for an even fancier formatting system!\nThis issue is now entering its cycle-long FCP for stabilization in 1.5 The accessors being in will also be considered for stabilization.\nAh, and may be good to get your opinion on the grapheme-vs- situation here. Currently whenever something is formatted you have the ability to specify a \"fill\" character which for when padding is applied (normally this is just an ascii space). As points out this fill is currently just a , but it could in theory be a grapheme in terms of \"one thing on the screen\". 
Do you have an opinion either way in that regard? Curious to hear thoughts!\nUh. It\u2019s not just graphemes. The whole // thing in as currently written is based on some assumptions: \u2019re printing to something that (like most terminal emulators) align text on a grid . Unfortunately, as often with Unicode, it\u2019s complicated. Not only grapheme clusters can have more than one code point and still only use one slot (with combining code points), but most characters of some Asian languages and most emoji are \u201cfull width\u201d: they\u2019re twice the usual width in most monospace fonts. Control characters be displayed. Or they might be interpreted by the terminal to move the cursor around. has some more background. Here is an extract, about emoji flags: Our best bet for \u201dwhat is the width of this string?\u201d is probably That leaves dealing with if it\u2019s double width or a control character. If we want to do it, could be an arbitrary string rather than a single ? Or should be rejected?\nHm, those are very good points! I would be mostly tempted to just pare down everything and say it largely deals with ascii only (and maybe simple unicode?). I don't think it'd be too beneficial to start getting full-blown unicode-width support in libcore just to support a use case like this. Having some verification in the compiler, however, to make sure you're not doing crazy things seems reasonable? In theory, yes. I'd also be fine with this!\nThe libs team discussed this during triage today and the decision was to stabilize.\nTriage: looks like this was stabilized, closing!\nis still unstable, linking to this issue:\nI am a bit confused about the reason is still unstable. Is it because is only available from ? Is there anything I can help with to move this along?\nNominating for stabilization. I suspect that needs to be re-exported in std, but that seems like something trivial to solve while stabilizing. 
The only concern that I see left is that there appear to be 3 separate, but equivalent, definition of the enum in the various libraries. These should probably be de-duplicated, but again, since only one appears to be public, we can probably do this trivially as well during stabilization.\nI think we should remove and return an instead, but it seems reasonable to stabilize. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nShouldn't this get merged then?\nYes, with FCP to stabilize completed, the next step is a stabilization PR.", "positive_passages": [{"docid": "doc-en-rust-01f77f18243bddaeca35fc50314de03cb4b44453ab63b886b0c01531b02cd507", "text": "issue = \"27726\")] pub fn precision(&self) -> Option if f.flags & 1 << (FlagV1::Alternate as u32) > 0 { if f.alternate() { f.flags |= 1 << (FlagV1::SignAwareZeroPad as u32); if let None = f.width {", "commid": "rust_pr_28615"}], "negative_passages": []}
{"query_id": "q-en-rust-cb3c8dda2a453e2d7f4cc7937e7c8f83933dee49c111d7b8d5c3c15b4f3fa421", "query": "Tracking issue for the feature, which is and the associated enum.\nYes, please. Why? Makes little sense.\nI disagree that it makes little sense to expose getters, it's possible to envision a situation where one formatting is composed of many others and you perhaps want some to be formatted to various widths or perhaps enable various flags (e.g. the \"alternate\" mode). It's certainly niche but it's somewhat necessary for completeness.\n, , and all seem straightforward and can be stabilized as-is. I would also very much like to add and stabilize boolean accessors for the various parts of : , , , and (we could call the last one but that seems like a genuinely excessive name). It's pretty unfortunate we stabilized the accessor in the first place IMO, but oh well. I'm a little less sure about , though. What is returned if no fill character was specified? Why doesn't it return an like the others? Is sufficient? Should it be a full grapheme instead?\nI'd be fine just deprecating the method (it's basically useless without constants anyway) and adding boolean accessors, I agree that it better matches design in today's Rust. I'd also be fine making return an and just specifying that it's always one unicode character (it's how the format string is parsed). Perhaps in theory a grapheme could be specified but that seems like something for an even fancier formatting system!\nThis issue is now entering its cycle-long FCP for stabilization in 1.5 The accessors being in will also be considered for stabilization.\nAh, and may be good to get your opinion on the grapheme-vs- situation here. Currently whenever something is formatted you have the ability to specify a \"fill\" character which for when padding is applied (normally this is just an ascii space). As points out this fill is currently just a , but it could in theory be a grapheme in terms of \"one thing on the screen\". 
Do you have an opinion either way in that regard? Curious to hear thoughts!\nUh. It\u2019s not just graphemes. The whole // thing in as currently written is based on some assumptions: \u2019re printing to something that (like most terminal emulators) align text on a grid . Unfortunately, as often with Unicode, it\u2019s complicated. Not only grapheme clusters can have more than one code point and still only use one slot (with combining code points), but most characters of some Asian languages and most emoji are \u201cfull width\u201d: they\u2019re twice the usual width in most monospace fonts. Control characters be displayed. Or they might be interpreted by the terminal to move the cursor around. has some more background. Here is an extract, about emoji flags: Our best bet for \u201dwhat is the width of this string?\u201d is probably That leaves dealing with if it\u2019s double width or a control character. If we want to do it, could be an arbitrary string rather than a single ? Or should be rejected?\nHm, those are very good points! I would be mostly tempted to just pare down everything and say it largely deals with ascii only (and maybe simple unicode?). I don't think it'd be too beneficial to start getting full-blown unicode-width support in libcore just to support a use case like this. Having some verification in the compiler, however, to make sure you're not doing crazy things seems reasonable? In theory, yes. I'd also be fine with this!\nThe libs team discussed this during triage today and the decision was to stabilize.\nTriage: looks like this was stabilized, closing!\nis still unstable, linking to this issue:\nI am a bit confused about the reason is still unstable. Is it because is only available from ? Is there anything I can help with to move this along?\nNominating for stabilization. I suspect that needs to be re-exported in std, but that seems like something trivial to solve while stabilizing. 
The only concern that I see left is that there appear to be 3 separate, but equivalent, definition of the enum in the various libraries. These should probably be de-duplicated, but again, since only one appears to be public, we can probably do this trivially as well during stabilization.\nI think we should remove and return an instead, but it seems reasonable to stabilize. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nShouldn't this get merged then?\nYes, with FCP to stabilize completed, the next step is a stabilization PR.", "positive_passages": [{"docid": "doc-en-rust-4fd85a1b7ccc1b049c44ba911c40e8b40aa89b849d0dad479204e75d0ae89b19", "text": "fn float_to_decimal_common let force_sign = fmt.flags & (1 << (FlagV1::SignPlus as u32)) != 0; let force_sign = fmt.sign_plus(); let sign = match (force_sign, negative_zero) { (false, false) => flt2dec::Sign::Minus, (false, true) => flt2dec::Sign::MinusRaw,", "commid": "rust_pr_28615"}], "negative_passages": []}
{"query_id": "q-en-rust-cb3c8dda2a453e2d7f4cc7937e7c8f83933dee49c111d7b8d5c3c15b4f3fa421", "query": "Tracking issue for the feature, which is and the associated enum.\nYes, please. Why? Makes little sense.\nI disagree that it makes little sense to expose getters, it's possible to envision a situation where one formatting is composed of many others and you perhaps want some to be formatted to various widths or perhaps enable various flags (e.g. the \"alternate\" mode). It's certainly niche but it's somewhat necessary for completeness.\n, , and all seem straightforward and can be stabilized as-is. I would also very much like to add and stabilize boolean accessors for the various parts of : , , , and (we could call the last one but that seems like a genuinely excessive name). It's pretty unfortunate we stabilized the accessor in the first place IMO, but oh well. I'm a little less sure about , though. What is returned if no fill character was specified? Why doesn't it return an like the others? Is sufficient? Should it be a full grapheme instead?\nI'd be fine just deprecating the method (it's basically useless without constants anyway) and adding boolean accessors, I agree that it better matches design in today's Rust. I'd also be fine making return an and just specifying that it's always one unicode character (it's how the format string is parsed). Perhaps in theory a grapheme could be specified but that seems like something for an even fancier formatting system!\nThis issue is now entering its cycle-long FCP for stabilization in 1.5 The accessors being in will also be considered for stabilization.\nAh, and may be good to get your opinion on the grapheme-vs- situation here. Currently whenever something is formatted you have the ability to specify a \"fill\" character which for when padding is applied (normally this is just an ascii space). As points out this fill is currently just a , but it could in theory be a grapheme in terms of \"one thing on the screen\". 
Do you have an opinion either way in that regard? Curious to hear thoughts!\nUh. It\u2019s not just graphemes. The whole // thing in as currently written is based on some assumptions: \u2019re printing to something that (like most terminal emulators) align text on a grid . Unfortunately, as often with Unicode, it\u2019s complicated. Not only grapheme clusters can have more than one code point and still only use one slot (with combining code points), but most characters of some Asian languages and most emoji are \u201cfull width\u201d: they\u2019re twice the usual width in most monospace fonts. Control characters be displayed. Or they might be interpreted by the terminal to move the cursor around. has some more background. Here is an extract, about emoji flags: Our best bet for \u201dwhat is the width of this string?\u201d is probably That leaves dealing with if it\u2019s double width or a control character. If we want to do it, could be an arbitrary string rather than a single ? Or should be rejected?\nHm, those are very good points! I would be mostly tempted to just pare down everything and say it largely deals with ascii only (and maybe simple unicode?). I don't think it'd be too beneficial to start getting full-blown unicode-width support in libcore just to support a use case like this. Having some verification in the compiler, however, to make sure you're not doing crazy things seems reasonable? In theory, yes. I'd also be fine with this!\nThe libs team discussed this during triage today and the decision was to stabilize.\nTriage: looks like this was stabilized, closing!\nis still unstable, linking to this issue:\nI am a bit confused about the reason is still unstable. Is it because is only available from ? Is there anything I can help with to move this along?\nNominating for stabilization. I suspect that needs to be re-exported in std, but that seems like something trivial to solve while stabilizing. 
The only concern that I see left is that there appear to be 3 separate, but equivalent, definition of the enum in the various libraries. These should probably be de-duplicated, but again, since only one appears to be public, we can probably do this trivially as well during stabilization.\nI think we should remove and return an instead, but it seems reasonable to stabilize. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nShouldn't this get merged then?\nYes, with FCP to stabilize completed, the next step is a stabilization PR.", "positive_passages": [{"docid": "doc-en-rust-56bedd0a125417bb071cd1f797e30aa1c8973f38445d3c1b46001a08b47deda7", "text": "fn float_to_exponential_common let force_sign = fmt.flags & (1 << (FlagV1::SignPlus as u32)) != 0; let force_sign = fmt.sign_plus(); let sign = match force_sign { false => flt2dec::Sign::Minus, true => flt2dec::Sign::MinusPlus,", "commid": "rust_pr_28615"}], "negative_passages": []}
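The records above discuss stabilizing the `fmt::Formatter` accessor methods in place of the raw `flags` bitmask. A minimal runnable sketch of those accessors (the `Padded` wrapper type is hypothetical, invented for this illustration; the `Formatter` methods themselves are the real, since-stabilized API):

```rust
use std::fmt;

// Hypothetical wrapper used only for this sketch.
struct Padded(f64);

impl fmt::Display for Padded {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // The boolean accessors discussed in the thread replace checks
        // against `FlagV1` bit positions:
        let plus = f.sign_plus();             // `+` flag
        let alt = f.alternate();              // `#` flag
        let zero = f.sign_aware_zero_pad();   // `0` flag
        let _ = (plus, alt, zero); // read but unused in this tiny sketch

        // `width` and `precision` return `Option`s parsed from the spec:
        let w = f.width().unwrap_or(0);       // e.g. `{:8}`
        let p = f.precision().unwrap_or(2);   // e.g. `{:.3}`
        write!(f, "{:>w$.p$}", self.0, w = w, p = p)
    }
}

fn main() {
    assert_eq!(format!("{:8.3}", Padded(3.14159)), "   3.142");
    assert_eq!(format!("{}", Padded(1.5)), "1.50");
    println!("ok");
}
```

The diffs in the records above (`rust_pr_28615`) are exactly this migration inside libcore: `self.flags & (1 << (FlagV1::SignAwareZeroPad as u32)) != 0` becomes `self.sign_aware_zero_pad()`, and likewise for `sign_plus` and `alternate`.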
{"query_id": "q-en-rust-8388d7b1b57e76a9f621a117dc1a6278da99e1855ca97db4d2310863427ae0b4", "query": "This compiles and runs without warnings: I at least expected the unused attribute lint to catch this\nThey are not ignored, it's just that attributes on item macros apply to the macro expansion rather than the resultant item, e.g this errors as expected as the macro isn't expanded.\nAh, and they're removed when the macro is expanded, which is why the lint doesn't pick them up.\nYeah as noted this is actually working as intended, so closing.\nI still feel uneasy about the lint missing these attributes, but I guess that cannot be fixed unless the attribute system is rewritten", "positive_passages": [{"docid": "doc-en-rust-d37399cb719695b7f94573c73bfaae9b8e85014e0d623d2dc52b4457a53b6837", "text": "// // Local Variables: // mode: C++ // mode: rust // fill-column: 78; // indent-tabs-mode: nil // c-basic-offset: 4", "commid": "rust_pr_398"}], "negative_passages": []}
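The record above concerns attributes on item-macro invocations: they are not ignored, they apply to the macro's expansion. A small sketch of that behaviour (the `make_answer!` macro is invented for illustration):

```rust
// The attribute on the invocation below applies to the expanded item,
// not to the invocation itself.
macro_rules! make_answer {
    () => {
        fn answer() -> i32 {
            42
        }
    };
}

#[allow(dead_code)] // ends up on the expanded `fn answer`
make_answer!();

fn main() {
    // The expansion produced a real item we can call:
    assert_eq!(answer(), 42);
    println!("ok");
}
```

This matches the explanation in the thread: the lint never sees an attribute "on a macro" because by the time lints run, the attribute has been attached to whatever the macro expanded into.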
{"query_id": "q-en-rust-11ab28a076dfabf628b922f54c3fbd59c5db2b0bf776c31cdd624cc3012cc60b", "query": "When you search for something on the online docs (which I assume are generated using cargo doc) a list of results comes up. When you clear the search bar/press the X button it probably should clear the results to allow you to view the original page. Anyone know how to fix this?\nI don't believe that we have anything set up for that. Is there an \"X\" button? I don't see it anywhere. Regardless, clearing the box or pressing escape or something should probably clear it, for sure.\n! And then after clicking the \"X\": !\nSearch field is and therefore it might have various user agent specific functionality. Simpler way to reproduce in a cross-UA way would be to follow these steps: a search query; the whole query at once by highlighting it and clicking backspace, delete or any other relevant button; Result list not changing. We could go back to the page that was open before the search field was used to fix this. Note that we do know what page it was, because we simply append to the URI of the page we start the search at.\nI took a shot at it and in the half an hour I dedicated to it, all I found that search code is a complete mess and this might be harder to fix correctly than it might appear at a first glance.", "positive_passages": [{"docid": "doc-en-rust-e7d7c73d404ba3abc87ef5bc3675006d1d04271ca22478af988066b238167d40", "text": "} function startSearch() { $(\".search-input\").on(\"keyup\",function() { if ($(this).val().length === 0) { window.history.replaceState(\"\", \"std - Rust\", \"?search=\"); $('#main.content').removeClass('hidden'); $('#search.content').addClass('hidden'); } }); var keyUpTimeout; $('.do-search').on('click', search); $('.search-input').on('keyup', function() {", "commid": "rust_pr_28795"}], "negative_passages": []}
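The diff in the record above (`rust_pr_28795`) hides the results pane when the rustdoc search box becomes empty. A distilled, DOM-free sketch of that logic, with the panes injected as plain objects so it is testable outside a browser (the function name and `hidden` field are illustrative, not rustdoc's actual code):

```javascript
// When the query is emptied, restore the main pane and hide results.
function onSearchKeyup(query, mainPane, searchPane) {
  if (query.length === 0) {
    mainPane.hidden = false;
    searchPane.hidden = true;
  }
}

// With a real DOM this would be wired roughly like the diff does:
//   input.addEventListener("keyup", () =>
//     onSearchKeyup(input.value,
//                   document.getElementById("main"),
//                   document.getElementById("search")));
```

The actual fix also rewrites the URL's `?search=` parameter via `history.replaceState` so the cleared state survives navigation; that part is omitted here.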
{"query_id": "q-en-rust-165546cc6d1664a37fe24db06bf255ae7f919e8684ae20907812cf93b20914b9", "query": "When trying to update uutils/coreutils to HEAD, I got this ICE: Version: code:\napparently, this happens as a side effect of the build process. i had built with stable, then tried to build with nightly, giving incorrect metadata", "positive_passages": [{"docid": "doc-en-rust-16655e1b66b04d4ea8eab2e01e78493177e16bbabf6080fec191548dc7724e81", "text": "E0495, // cannot infer an appropriate lifetime due to conflicting requirements E0496, // .. name `..` shadows a .. name that is already in scope E0498, // malformed plugin attribute E0514, // metadata version mismatch }", "commid": "rust_pr_28702"}], "negative_passages": []}
{"query_id": "q-en-rust-165546cc6d1664a37fe24db06bf255ae7f919e8684ae20907812cf93b20914b9", "query": "When trying to update uutils/coreutils to HEAD, I got this ICE: Version: code:\napparently, this happens as a side effect of the build process. i had built with stable, then tried to build with nightly, giving incorrect metadata", "positive_passages": [{"docid": "doc-en-rust-c482e36f801dd43bcbef4be3c51849d45e732064154bf16056292d54b5963a56", "text": "pub const tag_impl_coerce_unsized_kind: usize = 0xa5; pub const tag_items_data_item_constness: usize = 0xa6; pub const tag_rustc_version: usize = 0x10f; pub fn rustc_version() -> String { format!( \"rustc {}\", option_env!(\"CFG_VERSION\").unwrap_or(\"unknown version\") ) } ", "commid": "rust_pr_28702"}], "negative_passages": []}
{"query_id": "q-en-rust-165546cc6d1664a37fe24db06bf255ae7f919e8684ae20907812cf93b20914b9", "query": "When trying to update uutils/coreutils to HEAD, I got this ICE: Version: code:\napparently, this happens as a side effect of the build process. i had built with stable, then tried to build with nightly, giving incorrect metadata", "positive_passages": [{"docid": "doc-en-rust-a0df95f6c0155d1f330f444e6806b5c0ea06795206b068bd61f0d3707a12ce86", "text": "use back::svh::Svh; use session::{config, Session}; use session::search_paths::PathKind; use metadata::common::rustc_version; use metadata::cstore; use metadata::cstore::{CStore, CrateSource, MetadataBlob}; use metadata::decoder;", "commid": "rust_pr_28702"}], "negative_passages": []}
{"query_id": "q-en-rust-165546cc6d1664a37fe24db06bf255ae7f919e8684ae20907812cf93b20914b9", "query": "When trying to update uutils/coreutils to HEAD, I got this ICE: Version: code:\napparently, this happens as a side effect of the build process. i had built with stable, then tried to build with nightly, giving incorrect metadata", "positive_passages": [{"docid": "doc-en-rust-dc7528e35ac4f58f9db16503ee8f69b940c94dea1ed697dc2701068de8957af7", "text": "return ret; } fn verify_rustc_version(&self, name: &str, span: Span, metadata: &MetadataBlob) { let crate_rustc_version = decoder::crate_rustc_version(metadata.as_slice()); if crate_rustc_version != Some(rustc_version()) { span_err!(self.sess, span, E0514, \"the crate `{}` has been compiled with {}, which is incompatible with this version of rustc\", name, crate_rustc_version .as_ref().map(|s|&**s) .unwrap_or(\"an old version of rustc\") ); self.sess.abort_if_errors(); } } fn register_crate(&mut self, root: &Option pub struct ExprUseVisitor<'d, 't, 'a: 't, 'tcx:'a+'d+'t> { pub struct ExprUseVisitor<'d, 't, 'a: 't, 'tcx:'a+'d> { typer: &'t infer::InferCtxt<'a, 'tcx>, mc: mc::MemCategorizationContext<'t, 'a, 'tcx>, delegate: &'d mut Delegate<'tcx>,", "commid": "rust_pr_28702"}], "negative_passages": []}
{"query_id": "q-en-rust-165546cc6d1664a37fe24db06bf255ae7f919e8684ae20907812cf93b20914b9", "query": "When trying to update uutils/coreutils to HEAD, I got this ICE: Version: code:\napparently, this happens as a side effect of the build process. i had built with stable, then tried to build with nightly, giving incorrect metadata", "positive_passages": [{"docid": "doc-en-rust-b536b3aeb7a32be04f8b83bfe28ca29f11bd9ebcc2dbf2951bea982df2c84d03", "text": "impl<'d,'t,'a,'tcx> ExprUseVisitor<'d,'t,'a,'tcx> { pub fn new(delegate: &'d mut Delegate<'tcx>, typer: &'t infer::InferCtxt<'a, 'tcx>) -> ExprUseVisitor<'d,'t,'a,'tcx> -> ExprUseVisitor<'d,'t,'a,'tcx> where 'tcx:'a { ExprUseVisitor { typer: typer,", "commid": "rust_pr_28702"}], "negative_passages": []}
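The records above show the fix for the metadata ICE: stamp each crate's metadata with the producing rustc version and reject mismatches with E0514 instead of crashing on an undecodable blob. A self-contained sketch of that check (function shapes mirror the diff but are simplified; they are not the real rustc internals):

```rust
// Version string embedded into crate metadata at compile time.
// CFG_VERSION is a rustc build-system variable; in an ordinary build it
// is unset and the fallback is used.
fn rustc_version() -> String {
    format!(
        "rustc {}",
        option_env!("CFG_VERSION").unwrap_or("unknown version")
    )
}

// On load, compare the recorded version against our own; `None` models
// metadata produced before the version tag existed.
fn verify(crate_name: &str, recorded: Option<&str>) -> Result<(), String> {
    let current = rustc_version();
    if recorded != Some(current.as_str()) {
        return Err(format!(
            "the crate `{}` has been compiled with {}, which is \
             incompatible with this version of rustc",
            crate_name,
            recorded.unwrap_or("an old version of rustc")
        ));
    }
    Ok(())
}

fn main() {
    assert!(verify("uu_ls", None).is_err());
    let current = rustc_version();
    assert!(verify("uu_ls", Some(current.as_str())).is_ok());
    println!("ok");
}
```

This is exactly the scenario in the report: a crate built by stable was picked up by a nightly build, and before this check the decoder ICE'd on the foreign metadata.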
{"query_id": "q-en-rust-dca5aa8713ccc482463c7daa7fec86aae54a04f4400b3fb9be93d0f6724c246d", "query": "The first section on can be difficult to understand for novice programmers, particularly the line \"Anonymous functions that have an associated environment are called \u2018closures\u2019, because they close over an environment.\" It's difficult from this context to understand what \"environment\" is referring to.\nYeah, this definition is kind of embarassingly self-referential :sweat:\nI'm going to try to clarify this.\nfixed this", "positive_passages": [{"docid": "doc-en-rust-f879b28a420cb50f348a3c9897d52286e6156286ea222e37988a0687cabfbdfe", "text": "% Closures Rust not only has named functions, but anonymous functions as well. Anonymous functions that have an associated environment are called \u2018closures\u2019, because they close over an environment. Rust has a really great implementation of them, as we\u2019ll see. Sometimes it is useful to wrap up a function and _free variables_ for better clarity and reuse. The free variables that can be used come from the enclosing scope and are \u2018closed over\u2019 when used in the function. From this, we get the name \u2018closures\u2019 and Rust provides a really great implementation of them, as we\u2019ll see. # Syntax", "commid": "rust_pr_28856"}], "negative_passages": []}
{"query_id": "q-en-rust-dca5aa8713ccc482463c7daa7fec86aae54a04f4400b3fb9be93d0f6724c246d", "query": "The first section on can be difficult to understand for novice programmers, particularly the line \"Anonymous functions that have an associated environment are called \u2018closures\u2019, because they close over an environment.\" It's difficult from this context to understand what \"environment\" is referring to.\nYeah, this definition is kind of embarassingly self-referential :sweat:\nI'm going to try to clarify this.\nfixed this", "positive_passages": [{"docid": "doc-en-rust-b57792600f3fdfde7082944216655f0c6377540e26ac8e20d4b13cc27c2c38e8", "text": "``` You\u2019ll notice a few things about closures that are a bit different from regular functions defined with `fn`. The first is that we did not need to named functions defined with `fn`. The first is that we did not need to annotate the types of arguments the closure takes or the values it returns. We can:", "commid": "rust_pr_28856"}], "negative_passages": []}
{"query_id": "q-en-rust-dca5aa8713ccc482463c7daa7fec86aae54a04f4400b3fb9be93d0f6724c246d", "query": "The first section on can be difficult to understand for novice programmers, particularly the line \"Anonymous functions that have an associated environment are called \u2018closures\u2019, because they close over an environment.\" It's difficult from this context to understand what \"environment\" is referring to.\nYeah, this definition is kind of embarassingly self-referential :sweat:\nI'm going to try to clarify this.\nfixed this", "positive_passages": [{"docid": "doc-en-rust-997f7ad35b3af6173a7160aa267f7be5b0809cd920770d5c32a5f8827224721f", "text": "assert_eq!(2, plus_one(1)); ``` But we don\u2019t have to. Why is this? Basically, it was chosen for ergonomic reasons. While specifying the full type for named functions is helpful with things like documentation and type inference, the types of closures are rarely documented since they\u2019re anonymous, and they don\u2019t cause the kinds of error-at-a-distance problems that inferring named function types can. But we don\u2019t have to. Why is this? Basically, it was chosen for ergonomic reasons. While specifying the full type for named functions is helpful with things like documentation and type inference, the full type signatures of closures are rarely documented since they\u2019re anonymous, and they don\u2019t cause the kinds of error-at-a-distance problems that inferring named function types can. The second is that the syntax is similar, but a bit different. I\u2019ve added spaces here for easier comparison: The second is that the syntax is similar, but a bit different. I\u2019ve added spaces here for easier comparison: ```rust fn plus_one_v1 (x: i32) -> i32 { x + 1 }", "commid": "rust_pr_28856"}], "negative_passages": []}
{"query_id": "q-en-rust-dca5aa8713ccc482463c7daa7fec86aae54a04f4400b3fb9be93d0f6724c246d", "query": "The first section on can be difficult to understand for novice programmers, particularly the line \"Anonymous functions that have an associated environment are called \u2018closures\u2019, because they close over an environment.\" It's difficult from this context to understand what \"environment\" is referring to.\nYeah, this definition is kind of embarassingly self-referential :sweat:\nI'm going to try to clarify this.\nfixed this", "positive_passages": [{"docid": "doc-en-rust-32c942591c15ff961f71cac4329167bf81e26ff5c1fefedfd3368e65f5b6b47f", "text": "# Closures and their environment Closures are called such because they \u2018close over their environment\u2019. It looks like this: The environment for a closure can include bindings from its enclosing scope in addition to parameters and local bindings. It looks like this: ```rust let num = 5;", "commid": "rust_pr_28856"}], "negative_passages": []}
{"query_id": "q-en-rust-dca5aa8713ccc482463c7daa7fec86aae54a04f4400b3fb9be93d0f6724c246d", "query": "The first section on can be difficult to understand for novice programmers, particularly the line \"Anonymous functions that have an associated environment are called \u2018closures\u2019, because they close over an environment.\" It's difficult from this context to understand what \"environment\" is referring to.\nYeah, this definition is kind of embarassingly self-referential :sweat:\nI'm going to try to clarify this.\nfixed this", "positive_passages": [{"docid": "doc-en-rust-0de7cf9b66e3be6b35ae7f518b8d1b2e76eeeb642e759902776596c4f016a2c9", "text": "it, while a `move` closure is self-contained. This means that you cannot generally return a non-`move` closure from a function, for example. But before we talk about taking and returning closures, we should talk some more about the way that closures are implemented. As a systems language, Rust gives you tons of control over what your code does, and closures are no different. But before we talk about taking and returning closures, we should talk some more about the way that closures are implemented. As a systems language, Rust gives you tons of control over what your code does, and closures are no different. # Closure implementation", "commid": "rust_pr_28856"}], "negative_passages": []}
{"query_id": "q-en-rust-dca5aa8713ccc482463c7daa7fec86aae54a04f4400b3fb9be93d0f6724c246d", "query": "The first section on can be difficult to understand for novice programmers, particularly the line \"Anonymous functions that have an associated environment are called \u2018closures\u2019, because they close over an environment.\" It's difficult from this context to understand what \"environment\" is referring to.\nYeah, this definition is kind of embarassingly self-referential :sweat:\nI'm going to try to clarify this.\nfixed this", "positive_passages": [{"docid": "doc-en-rust-e336995a7a063325da7876e3c8a0d8c0c70be9613c42d087dfec827301d350d5", "text": "# some_closure(1) } ``` Because `Fn` is a trait, we can bound our generic with it. In this case, our closure takes a `i32` as an argument and returns an `i32`, and so the generic bound we use is `Fn(i32) -> i32`. Because `Fn` is a trait, we can bound our generic with it. In this case, our closure takes a `i32` as an argument and returns an `i32`, and so the generic bound we use is `Fn(i32) -> i32`. There\u2019s one other key point here: because we\u2019re bounding a generic with a trait, this will get monomorphized, and therefore, we\u2019ll be doing static", "commid": "rust_pr_28856"}], "negative_passages": []}
{"query_id": "q-en-rust-dca5aa8713ccc482463c7daa7fec86aae54a04f4400b3fb9be93d0f6724c246d", "query": "The first section on can be difficult to understand for novice programmers, particularly the line \"Anonymous functions that have an associated environment are called \u2018closures\u2019, because they close over an environment.\" It's difficult from this context to understand what \"environment\" is referring to.\nYeah, this definition is kind of embarassingly self-referential :sweat:\nI'm going to try to clarify this.\nfixed this", "positive_passages": [{"docid": "doc-en-rust-3d62a91e9065e5d5b760b55ff466c2f1a66b3a2acc9ea5dc0053e1b19659c41d", "text": "The error also points out that the return type is expected to be a reference, but what we are trying to return is not. Further, we cannot directly assign a `'static` lifetime to an object. So we'll take a different approach and return a \"trait object\" by `Box`ing up the `Fn`. This _almost_ works: a \u2018trait object\u2019 by `Box`ing up the `Fn`. This _almost_ works: ```rust,ignore fn factory() -> Box ChainState::Both => self.b.last().or(self.a.last()), ChainState::Both => { // Must exhaust a before b. let a_last = self.a.last(); let b_last = self.b.last(); b_last.or(a_last) }, ChainState::Front => self.a.last(), ChainState::Back => self.b.last() }", "commid": "rust_pr_28818"}], "negative_passages": []}
{"query_id": "q-en-rust-33eeb25f389e2ae0f4e0d32ecb7178698ae3fe7517eb6be1629dad80be522f67", "query": "After having first read about iterators the TRPL book, it called my attention the scarce number of matching results in the API docs for a query like \"iterator adapters\". Curiously enough, it looks like the term used there is \"adaptors\" instead of adapters. Appearances @ TRPL(): \"iterator adapters\" -7 \"iterator adaptors\" -1 (part of compilation output) Appearances @ API docs(): \"iterator adapters\" -1 (book) \"iterator adaptors\" -158 Appearances found using typical 'Search in directory' feature in a text editor. Appearances found using the site search filter in Google, ie. \"iterator adaptors\" Assuming that there aren't semantics involved between these two words in English, I'd like to know what's your opinion on the next steps to take (so we can avoid the inconsistency and possible confusion). What about replacing 'adapters' to 'adaptors' everywhere in the book?\nI assume this was fixed by ?\nYes, this issue can be safely closed now.", "positive_passages": [{"docid": "doc-en-rust-321915c765e4a76eb93458930bff97ac962d3f629b4606b3bbf4a1af9f52baa1", "text": "talk about what you do want instead. There are three broad classes of things that are relevant here: iterators, *iterator adapters*, and *consumers*. Here's some definitions: *iterator adaptors*, and *consumers*. Here's some definitions: * *iterators* give you a sequence of values. * *iterator adapters* operate on an iterator, producing a new iterator with a * *iterator adaptors* operate on an iterator, producing a new iterator with a different output sequence. * *consumers* operate on an iterator, producing some final set of values.", "commid": "rust_pr_29066"}], "negative_passages": []}
{"query_id": "q-en-rust-33eeb25f389e2ae0f4e0d32ecb7178698ae3fe7517eb6be1629dad80be522f67", "query": "After having first read about iterators the TRPL book, it called my attention the scarce number of matching results in the API docs for a query like \"iterator adapters\". Curiously enough, it looks like the term used there is \"adaptors\" instead of adapters. Appearances @ TRPL(): \"iterator adapters\" -7 \"iterator adaptors\" -1 (part of compilation output) Appearances @ API docs(): \"iterator adapters\" -1 (book) \"iterator adaptors\" -158 Appearances found using typical 'Search in directory' feature in a text editor. Appearances found using the site search filter in Google, ie. \"iterator adaptors\" Assuming that there aren't semantics involved between these two words in English, I'd like to know what's your opinion on the next steps to take (so we can avoid the inconsistency and possible confusion). What about replacing 'adapters' to 'adaptors' everywhere in the book?\nI assume this was fixed by ?\nYes, this issue can be safely closed now.", "positive_passages": [{"docid": "doc-en-rust-37ff3598971b4d3ff40cee4dcd2ced84dcf9daab85cebd24523f69aca805fea1", "text": "These two basic iterators should serve you well. There are some more advanced iterators, including ones that are infinite. That's enough about iterators. Iterator adapters are the last concept That's enough about iterators. Iterator adaptors are the last concept we need to talk about with regards to iterators. Let's get to it! ## Iterator adapters ## Iterator adaptors *Iterator adapters* take an iterator and modify it somehow, producing *Iterator adaptors* take an iterator and modify it somehow, producing a new iterator. The simplest one is called `map`: ```rust,ignore", "commid": "rust_pr_29066"}], "negative_passages": []}
{"query_id": "q-en-rust-33eeb25f389e2ae0f4e0d32ecb7178698ae3fe7517eb6be1629dad80be522f67", "query": "After having first read about iterators the TRPL book, it called my attention the scarce number of matching results in the API docs for a query like \"iterator adapters\". Curiously enough, it looks like the term used there is \"adaptors\" instead of adapters. Appearances @ TRPL(): \"iterator adapters\" -7 \"iterator adaptors\" -1 (part of compilation output) Appearances @ API docs(): \"iterator adapters\" -1 (book) \"iterator adaptors\" -158 Appearances found using typical 'Search in directory' feature in a text editor. Appearances found using the site search filter in Google, ie. \"iterator adaptors\" Assuming that there aren't semantics involved between these two words in English, I'd like to know what's your opinion on the next steps to take (so we can avoid the inconsistency and possible confusion). What about replacing 'adapters' to 'adaptors' everywhere in the book?\nI assume this was fixed by ?\nYes, this issue can be safely closed now.", "positive_passages": [{"docid": "doc-en-rust-bedaa54d1a4ae3af6fe8ebd07cf2d514a0951c6223e3ee5a9d66bad0b9646af1", "text": "If you are trying to execute a closure on an iterator for its side effects, just use `for` instead. There are tons of interesting iterator adapters. `take(n)` will return an There are tons of interesting iterator adaptors. `take(n)` will return an iterator over the next `n` elements of the original iterator. Let's try it out with an infinite iterator:", "commid": "rust_pr_29066"}], "negative_passages": []}
{"query_id": "q-en-rust-33eeb25f389e2ae0f4e0d32ecb7178698ae3fe7517eb6be1629dad80be522f67", "query": "After having first read about iterators the TRPL book, it called my attention the scarce number of matching results in the API docs for a query like \"iterator adapters\". Curiously enough, it looks like the term used there is \"adaptors\" instead of adapters. Appearances @ TRPL(): \"iterator adapters\" -7 \"iterator adaptors\" -1 (part of compilation output) Appearances @ API docs(): \"iterator adapters\" -1 (book) \"iterator adaptors\" -158 Appearances found using typical 'Search in directory' feature in a text editor. Appearances found using the site search filter in Google, ie. \"iterator adaptors\" Assuming that there aren't semantics involved between these two words in English, I'd like to know what's your opinion on the next steps to take (so we can avoid the inconsistency and possible confusion). What about replacing 'adapters' to 'adaptors' everywhere in the book?\nI assume this was fixed by ?\nYes, this issue can be safely closed now.", "positive_passages": [{"docid": "doc-en-rust-a6e20417b37e108841264de8d2c07b9dc647699309a0db8701567912bc46ece9", "text": "This will give you a vector containing `6`, `12`, `18`, `24`, and `30`. This is just a small taste of what iterators, iterator adapters, and consumers This is just a small taste of what iterators, iterator adaptors, and consumers can help you with. There are a number of really useful iterators, and you can write your own as well. Iterators provide a safe, efficient way to manipulate all kinds of lists. They're a little unusual at first, but if you play with", "commid": "rust_pr_29066"}], "negative_passages": []}
{"query_id": "q-en-rust-439dea9dabca6daedadddac0f779d53ba67564b2d695a9255e3aa7b9704a76d6", "query": "The following code compiles on stable and beta, but not nightly: For reference, the error produced by the above is: The original test case was reduced down to the above thanks to eddyb and bluss. The original problematic line was (clap \u2192 \u2192 \u2192 boom). Specifically, the issue was with the method, though the error pointed to an earlier temporary as not living long enough. If desired, I can provide a complete, in-context example of this going awry. bluss suggested that this was related to .\ncc\nHmm I wonder if the iterators involved need to have the escape hatch for dropck to them.\nHere is a variant (that has also regressed) that does not use the macro:\n(It looks like may need similar treatment too, but I am having trouble actually making a concrete test illustrating the necessity there -- it seems when I call instead of in the above test case, even stable and beta complain about lifetime issues. Maybe the old dropck is catching something I'm missing in my manual analysis of , so I'm not going to lump that in until I see evidence that it is warranted.) Update: Oh, duh, has a lifetime bound (and a impl), so of course the old dropck was going to reject it.", "positive_passages": [{"docid": "doc-en-rust-42d470b2795ab6d25736169ebd86a62c9a5eaa0f6fe2040c0992d02bfa01a165", "text": "#[stable(feature = \"rust1\", since = \"1.0.0\")] impl /// write_ten_bytes(&mut buff).unwrap(); /// write_ten_bytes_at_end(&mut buff).unwrap(); /// /// assert_eq!(&buff.get_ref()[5..15], &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]); /// }", "commid": "rust_pr_29224"}], "negative_passages": []}
{"query_id": "q-en-rust-1425108cf54cd35afc3036671a181f95917b9e61cd95d372698b5d37b7202714", "query": "Part of These have gotten some love lately, but should still be checked out.\nI am happy with this now.", "positive_passages": [{"docid": "doc-en-rust-24bfc2158eeb67a8db90acfeb56c9b90b5eacf29cf21f58770bfdfd7304be236", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms. //! A Unicode scalar value //! Unicode scalar values //! //! This module provides the `CharExt` trait, as well as its //! implementation for the primitive `char` type, in order to allow", "commid": "rust_pr_30013"}], "negative_passages": []}
{"query_id": "q-en-rust-1425108cf54cd35afc3036671a181f95917b9e61cd95d372698b5d37b7202714", "query": "Part of These have gotten some love lately, but should still be checked out.\nI am happy with this now.", "positive_passages": [{"docid": "doc-en-rust-ba1183a7d196d1a851226e35d98ea31b611e8610d2a8c642f19c2b498acd3961", "text": "/// character, as `char`s. /// /// All characters are escaped with Rust syntax of the form `u{NNNN}` /// where `NNNN` is the shortest hexadecimal representation of the code /// point. /// where `NNNN` is the shortest hexadecimal representation. /// /// # Examples /// /// Basic usage: /// /// ``` /// for c in '\u2764'.escape_unicode() { /// print!(\"{}\", c);", "commid": "rust_pr_30013"}], "negative_passages": []}
{"query_id": "q-en-rust-1425108cf54cd35afc3036671a181f95917b9e61cd95d372698b5d37b7202714", "query": "Part of These have gotten some love lately, but should still be checked out.\nI am happy with this now.", "positive_passages": [{"docid": "doc-en-rust-7f21086aeaafa839fb37b115c7dfef64a7d99ba7b8578a026e4bb508e176d7e5", "text": "/// /// # Examples /// /// Basic usage: /// /// ``` /// let n = '\u00df'.len_utf16(); /// assert_eq!(n, 1);", "commid": "rust_pr_30013"}], "negative_passages": []}
{"query_id": "q-en-rust-1425108cf54cd35afc3036671a181f95917b9e61cd95d372698b5d37b7202714", "query": "Part of These have gotten some love lately, but should still be checked out.\nI am happy with this now.", "positive_passages": [{"docid": "doc-en-rust-0e029c1231faf4e20a4e9e07713290da95d4ceba44a0343a84df88343b621ac6", "text": "/// /// # Examples /// /// Basic usage: /// /// ``` /// #![feature(decode_utf16)] ///", "commid": "rust_pr_30013"}], "negative_passages": []}
{"query_id": "q-en-rust-1425108cf54cd35afc3036671a181f95917b9e61cd95d372698b5d37b7202714", "query": "Part of These have gotten some love lately, but should still be checked out.\nI am happy with this now.", "positive_passages": [{"docid": "doc-en-rust-70ff6a0999fad25b6f87651b9b24f6494e7226b97000f648f809918854311e0a", "text": "#[doc(primitive = \"char\")] // /// A Unicode scalar value. /// A character type. /// /// A `char` represents a /// *[Unicode scalar /// value](http://www.unicode.org/glossary/#unicode_scalar_value)*, as it can /// contain any Unicode code point except high-surrogate and low-surrogate code /// points. /// The `char` type represents a single character. More specifically, since /// 'character' isn't a well-defined concept in Unicode, `char` is a '[Unicode /// scalar value]', which is similar to, but not the same as, a '[Unicode code /// point]'. /// /// As such, only values in the ranges [0x0,0xD7FF] and [0xE000,0x10FFFF] /// (inclusive) are allowed. A `char` can always be safely cast to a `u32`; /// however the converse is not always true due to the above range limits /// and, as such, should be performed via the `from_u32` function. /// [Unicode scalar value]: http://www.unicode.org/glossary/#unicode_scalar_value /// [Unicode code point]: http://www.unicode.org/glossary/#code_point /// /// *[See also the `std::char` module](char/index.html).* /// This documentation describes a number of methods and trait implementations on the /// `char` type. For technical reasons, there is additional, separate /// documentation in [the `std::char` module](char/index.html) as well. /// /// # Representation /// /// `char` is always four bytes in size. 
This is a different representation than /// a given character would have as part of a [`String`], for example: /// /// ``` /// let v = vec!['h', 'e', 'l', 'l', 'o']; /// /// // five elements times four bytes for each element /// assert_eq!(20, v.len() * std::mem::size_of:: use trans::common::{type_is_sized, ExprOrMethodCall, node_id_substs, C_nil, const_get_elt}; use trans::common::{C_struct, C_undef, const_to_opt_int, const_to_opt_uint, VariantInfo, C_uint}; use trans::common::{type_is_fat_ptr, Field, C_vector, C_array, C_null, ExprId, MethodCallKey}; use trans::declare;", "commid": "rust_pr_29759"}], "negative_passages": []}
{"query_id": "q-en-rust-debca2743ad5a324e6cb531ac89145875900291930561b799afd49cbc4766892", "query": "We need to support references to static items in lvalues. This would be used for accessing a static lock or assigning to a static mut.", "positive_passages": [{"docid": "doc-en-rust-769c3e38081c07d4171195c44d723e3b24835ab406aa9f4e028ce12966c48d9b", "text": "} let opt_def = cx.tcx().def_map.borrow().get(&cur.id).map(|d| d.full_def()); if let Some(def::DefStatic(def_id, _)) = opt_def { get_static_val(cx, def_id, ety) common::get_static_val(cx, def_id, ety) } else { // If this isn't the address of a static, then keep going through // normal constant evaluation.", "commid": "rust_pr_29759"}], "negative_passages": []}
{"query_id": "q-en-rust-debca2743ad5a324e6cb531ac89145875900291930561b799afd49cbc4766892", "query": "We need to support references to static items in lvalues. This would be used for accessing a static lock or assigning to a static mut.", "positive_passages": [{"docid": "doc-en-rust-2adc89f506d8365cec35c37ce8783c4ff9fc59b567ecc36fbfbd16fe412e60cd", "text": "Ok(g) } } fn get_static_val<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, did: DefId, ty: Ty<'tcx>) -> ValueRef { if let Some(node_id) = ccx.tcx().map.as_local_node_id(did) { base::get_item_val(ccx, node_id) } else { base::trans_external_path(ccx, did, ty) } } ", "commid": "rust_pr_29759"}], "negative_passages": []}
{"query_id": "q-en-rust-debca2743ad5a324e6cb531ac89145875900291930561b799afd49cbc4766892", "query": "We need to support references to static items in lvalues. This would be used for accessing a static lock or assigning to a static mut.", "positive_passages": [{"docid": "doc-en-rust-feb5dded6f41bd92b336e5c9eebfff6173cbe72d3ff59f40d981f3dbb67c0e40", "text": "DatumBlock::new(bcx, datum.to_expr_datum()) } def::DefStatic(did, _) => { // There are two things that may happen here: // 1) If the static item is defined in this crate, it will be // translated using `get_item_val`, and we return a pointer to // the result. // 2) If the static item is defined in another crate then we add // (or reuse) a declaration of an external global, and return a // pointer to that. let const_ty = expr_ty(bcx, ref_expr); // For external constants, we don't inline. let val = if let Some(node_id) = bcx.tcx().map.as_local_node_id(did) { // Case 1. // The LLVM global has the type of its initializer, // which may not be equal to the enum's type for // non-C-like enums. let val = base::get_item_val(bcx.ccx(), node_id); let pty = type_of::type_of(bcx.ccx(), const_ty).ptr_to(); PointerCast(bcx, val, pty) } else { // Case 2. base::get_extern_const(bcx.ccx(), did, const_ty) }; let val = get_static_val(bcx.ccx(), did, const_ty); let lval = Lvalue::new(\"expr::trans_def\"); DatumBlock::new(bcx, Datum::new(val, const_ty, LvalueExpr(lval))) }", "commid": "rust_pr_29759"}], "negative_passages": []}
{"query_id": "q-en-rust-debca2743ad5a324e6cb531ac89145875900291930561b799afd49cbc4766892", "query": "We need to support references to static items in lvalues. This would be used for accessing a static lock or assigning to a static mut.", "positive_passages": [{"docid": "doc-en-rust-a871a9726838a6af1fa7e17538c18c790f74751a3f848cc6c1d74f04b246af2d", "text": "tcx.sess.bug(&format!(\"using operand temp {:?} as lvalue\", lvalue)), }, mir::Lvalue::Arg(index) => self.args[index as usize], mir::Lvalue::Static(_def_id) => unimplemented!(), mir::Lvalue::Static(def_id) => { let const_ty = self.mir.lvalue_ty(tcx, lvalue); LvalueRef::new(common::get_static_val(ccx, def_id, const_ty.to_ty(tcx)), const_ty) }, mir::Lvalue::ReturnPointer => { let return_ty = bcx.monomorphize(&self.mir.return_ty); let llval = fcx.get_ret_slot(bcx, return_ty, \"return\");", "commid": "rust_pr_29759"}], "negative_passages": []}
{"query_id": "q-en-rust-928cd822b2040ff1cc4c93f9d2fb8aa51ca8572bc50f30533405cf5c07731993", "query": "The presence in a Rust source file of unusual but useful kinds of whitespace, such as ASCII 0x0C (form feed), leads to the following error: I have a specific use case for form-feeds in source files. But I think in general it is nice to ignore the same whitespace that every other programming language and file format ignores; it lessens confusion for people coming from other languages and backgrounds. My specific use case is the long-standing, but somewhat uncommon use of the form-feed character (which semantically is a separator between pages of text) as a way to group together especially closely related functions or blocks in a file of source code. Text editors or IDEs such as vim, Emacs or XCode provide convenience features to display these form-feeds in aesthetically pleasing way, move between form-feed-delimited pages, and restrict editing to one form-feed-delimited page at a time. It's just a simple convenience feature, but it would really be nice to support it.\n+1, I also use this feature and was disappointed when I found Rust didn't treat it as whitespace.\nLooks like a fairly simple change could be made to the lexer so it uses instead of limiting to . The only think I can think of is that the function in has been around since before we had a better function and nobody has changed it since then.\n/cc , do we want to accept all kinds of whitespace?\nI believe we should, yes.", "positive_passages": [{"docid": "doc-en-rust-cb37be509f1015ad3a82b3608f8bc9a75babb2110990b82cd117036d1aa2a613", "text": "[rust]: https://www.rust-lang.org \u201cThe Rust Programming Language\u201d is split into eight sections. This introduction \u201cThe Rust Programming Language\u201d is split into sections. This introduction is the first. After this: * [Getting started][gs] - Set up your computer for Rust development. * [Learn Rust][lr] - Learn Rust programming through small projects. 
* [Effective Rust][er] - Higher-level concepts for writing excellent Rust code. * [Tutorial: Guessing Game][gg] - Learn some Rust with a small project. * [Syntax and Semantics][ss] - Each bit of Rust, broken down into small chunks. * [Effective Rust][er] - Higher-level concepts for writing excellent Rust code. * [Nightly Rust][nr] - Cutting-edge features that aren\u2019t in stable builds yet. * [Glossary][gl] - A reference of terms used in the book. * [Bibliography][bi] - Background on Rust's influences, papers about Rust. [gs]: getting-started.html [lr]: learn-rust.html [gg]: guessing-game.html [er]: effective-rust.html [ss]: syntax-and-semantics.html [nr]: nightly-rust.html [gl]: glossary.html [bi]: bibliography.html After reading this introduction, you\u2019ll want to dive into either \u2018Learn Rust\u2019 or \u2018Syntax and Semantics\u2019, depending on your preference: \u2018Learn Rust\u2019 if you want to dive in with a project, or \u2018Syntax and Semantics\u2019 if you prefer to start small, and learn a single concept thoroughly before moving onto the next. Copious cross-linking connects these parts together. ### Contributing The source files from which this book is generated can be found on", "commid": "rust_pr_30595"}], "negative_passages": []}
{"query_id": "q-en-rust-928cd822b2040ff1cc4c93f9d2fb8aa51ca8572bc50f30533405cf5c07731993", "query": "The presence in a Rust source file of unusual but useful kinds of whitespace, such as ASCII 0x0C (form feed), leads to the following error: I have a specific use case for form-feeds in source files. But I think in general it is nice to ignore the same whitespace that every other programming language and file format ignores; it lessens confusion for people coming from other languages and backgrounds. My specific use case is the long-standing, but somewhat uncommon use of the form-feed character (which semantically is a separator between pages of text) as a way to group together especially closely related functions or blocks in a file of source code. Text editors or IDEs such as vim, Emacs or XCode provide convenience features to display these form-feeds in aesthetically pleasing way, move between form-feed-delimited pages, and restrict editing to one form-feed-delimited page at a time. It's just a simple convenience feature, but it would really be nice to support it.\n+1, I also use this feature and was disappointed when I found Rust didn't treat it as whitespace.\nLooks like a fairly simple change could be made to the lexer so it uses instead of limiting to . 
The only think I can think of is that the function in has been around since before we had a better function and nobody has changed it since then.\n/cc , do we want to accept all kinds of whitespace?\nI believe we should, yes.", "positive_passages": [{"docid": "doc-en-rust-a11cc19f04b6df079bcae0078fc7ddc7655cc41746b6ae9775d3cf0d725aef49", "text": "# Summary * [Getting Started](getting-started.md) * [Learn Rust](learn-rust.md) * [Guessing Game](guessing-game.md) * [Dining Philosophers](dining-philosophers.md) * [Rust Inside Other Languages](rust-inside-other-languages.md) * [Tutorial: Guessing Game](guessing-game.md) * [Syntax and Semantics](syntax-and-semantics.md) * [Variable Bindings](variable-bindings.md) * [Functions](functions.md)", "commid": "rust_pr_30595"}], "negative_passages": []}
{"query_id": "q-en-rust-928cd822b2040ff1cc4c93f9d2fb8aa51ca8572bc50f30533405cf5c07731993", "query": "The presence in a Rust source file of unusual but useful kinds of whitespace, such as ASCII 0x0C (form feed), leads to the following error: I have a specific use case for form-feeds in source files. But I think in general it is nice to ignore the same whitespace that every other programming language and file format ignores; it lessens confusion for people coming from other languages and backgrounds. My specific use case is the long-standing, but somewhat uncommon use of the form-feed character (which semantically is a separator between pages of text) as a way to group together especially closely related functions or blocks in a file of source code. Text editors or IDEs such as vim, Emacs or XCode provide convenience features to display these form-feeds in aesthetically pleasing way, move between form-feed-delimited pages, and restrict editing to one form-feed-delimited page at a time. It's just a simple convenience feature, but it would really be nice to support it.\n+1, I also use this feature and was disappointed when I found Rust didn't treat it as whitespace.\nLooks like a fairly simple change could be made to the lexer so it uses instead of limiting to . The only think I can think of is that the function in has been around since before we had a better function and nobody has changed it since then.\n/cc , do we want to accept all kinds of whitespace?\nI believe we should, yes.", "positive_passages": [{"docid": "doc-en-rust-43c553a0cb0b3d9660ff937229b67968c4bd6faaad1f931ce501069bb2ef3536", "text": " % Dining Philosophers For our second project, let\u2019s look at a classic concurrency problem. It\u2019s called \u2018the dining philosophers\u2019. It was originally conceived by Dijkstra in 1965, but we\u2019ll use a lightly adapted version from [this paper][paper] by Tony Hoare in 1985. 
[paper]: http://www.usingcsp.com/cspbook.pdf > In ancient times, a wealthy philanthropist endowed a College to accommodate > five eminent philosophers. Each philosopher had a room in which they could > engage in their professional activity of thinking; there was also a common > dining room, furnished with a circular table, surrounded by five chairs, each > labelled by the name of the philosopher who was to sit in it. They sat > anticlockwise around the table. To the left of each philosopher there was > laid a golden fork, and in the center stood a large bowl of spaghetti, which > was constantly replenished. A philosopher was expected to spend most of > their time thinking; but when they felt hungry, they went to the dining > room, sat down in their own chair, picked up their own fork on their left, > and plunged it into the spaghetti. But such is the tangled nature of > spaghetti that a second fork is required to carry it to the mouth. The > philosopher therefore had also to pick up the fork on their right. When > they were finished they would put down both their forks, get up from their > chair, and continue thinking. Of course, a fork can be used by only one > philosopher at a time. If the other philosopher wants it, they just have > to wait until the fork is available again. This classic problem shows off a few different elements of concurrency. The reason is that it's actually slightly tricky to implement: a simple implementation can deadlock. For example, let's consider a simple algorithm that would solve this problem: 1. A philosopher picks up the fork on their left. 2. They then pick up the fork on their right. 3. They eat. 4. They return the forks. Now, let\u2019s imagine this sequence of events: 1. Philosopher 1 begins the algorithm, picking up the fork on their left. 2. Philosopher 2 begins the algorithm, picking up the fork on their left. 3. Philosopher 3 begins the algorithm, picking up the fork on their left. 4. 
Philosopher 4 begins the algorithm, picking up the fork on their left. 5. Philosopher 5 begins the algorithm, picking up the fork on their left. 6. ... ? All the forks are taken, but nobody can eat! There are different ways to solve this problem. We\u2019ll get to our solution in the tutorial itself. For now, let\u2019s get started and create a new project with `cargo`: ```bash $ cd ~/projects $ cargo new dining_philosophers --bin $ cd dining_philosophers ``` Now we can start modeling the problem itself. We\u2019ll start with the philosophers in `src/main.rs`: ```rust struct Philosopher { name: String, } impl Philosopher { fn new(name: &str) -> Philosopher { Philosopher { name: name.to_string(), } } } fn main() { let p1 = Philosopher::new(\"Judith Butler\"); let p2 = Philosopher::new(\"Gilles Deleuze\"); let p3 = Philosopher::new(\"Karl Marx\"); let p4 = Philosopher::new(\"Emma Goldman\"); let p5 = Philosopher::new(\"Michel Foucault\"); } ``` Here, we make a [`struct`][struct] to represent a philosopher. For now, a name is all we need. We choose the [`String`][string] type for the name, rather than `&str`. Generally speaking, working with a type which owns its data is easier than working with one that uses references. [struct]: structs.html [string]: strings.html Let\u2019s continue: ```rust # struct Philosopher { # name: String, # } impl Philosopher { fn new(name: &str) -> Philosopher { Philosopher { name: name.to_string(), } } } ``` This `impl` block lets us define things on `Philosopher` structs. In this case, we define an \u2018associated function\u2019 called `new`. The first line looks like this: ```rust # struct Philosopher { # name: String, # } # impl Philosopher { fn new(name: &str) -> Philosopher { # Philosopher { # name: name.to_string(), # } # } # } ``` We take one argument, a `name`, of type `&str`. This is a reference to another string. It returns an instance of our `Philosopher` struct. 
```rust # struct Philosopher { # name: String, # } # impl Philosopher { # fn new(name: &str) -> Philosopher { Philosopher { name: name.to_string(), } # } # } ``` This creates a new `Philosopher`, and sets its `name` to our `name` argument. Not just the argument itself, though, as we call `.to_string()` on it. This will create a copy of the string that our `&str` points to, and give us a new `String`, which is the type of the `name` field of `Philosopher`. Why not accept a `String` directly? It\u2019s nicer to call. If we took a `String`, but our caller had a `&str`, they\u2019d have to call this method themselves. The downside of this flexibility is that we _always_ make a copy. For this small program, that\u2019s not particularly important, as we know we\u2019ll just be using short strings anyway. One last thing you\u2019ll notice: we just define a `Philosopher`, and seemingly don\u2019t do anything with it. Rust is an \u2018expression based\u2019 language, which means that almost everything in Rust is an expression which returns a value. This is true of functions as well \u2014 the last expression is automatically returned. Since we create a new `Philosopher` as the last expression of this function, we end up returning it. This name, `new()`, isn\u2019t anything special to Rust, but it is a convention for functions that create new instances of structs. Before we talk about why, let\u2019s look at `main()` again: ```rust # struct Philosopher { # name: String, # } # # impl Philosopher { # fn new(name: &str) -> Philosopher { # Philosopher { # name: name.to_string(), # } # } # } # fn main() { let p1 = Philosopher::new(\"Judith Butler\"); let p2 = Philosopher::new(\"Gilles Deleuze\"); let p3 = Philosopher::new(\"Karl Marx\"); let p4 = Philosopher::new(\"Emma Goldman\"); let p5 = Philosopher::new(\"Michel Foucault\"); } ``` Here, we create five variable bindings with five new philosophers. 
If we _didn\u2019t_ define that `new()` function, it would look like this: ```rust # struct Philosopher { # name: String, # } fn main() { let p1 = Philosopher { name: \"Judith Butler\".to_string() }; let p2 = Philosopher { name: \"Gilles Deleuze\".to_string() }; let p3 = Philosopher { name: \"Karl Marx\".to_string() }; let p4 = Philosopher { name: \"Emma Goldman\".to_string() }; let p5 = Philosopher { name: \"Michel Foucault\".to_string() }; } ``` That\u2019s much noisier. Using `new` has other advantages too, but even in this simple case, it ends up being nicer to use. Now that we\u2019ve got the basics in place, there\u2019s a number of ways that we can tackle the broader problem here. I like to start from the end first: let\u2019s set up a way for each philosopher to finish eating. As a tiny step, let\u2019s make a method, and then loop through all the philosophers, calling it: ```rust struct Philosopher { name: String, } impl Philosopher { fn new(name: &str) -> Philosopher { Philosopher { name: name.to_string(), } } fn eat(&self) { println!(\"{} is done eating.\", self.name); } } fn main() { let philosophers = vec![ Philosopher::new(\"Judith Butler\"), Philosopher::new(\"Gilles Deleuze\"), Philosopher::new(\"Karl Marx\"), Philosopher::new(\"Emma Goldman\"), Philosopher::new(\"Michel Foucault\"), ]; for p in &philosophers { p.eat(); } } ``` Let\u2019s look at `main()` first. Rather than have five individual variable bindings for our philosophers, we make a `Vec", "commid": "rust_pr_30595"}], "negative_passages": []}
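The chapter text quoted above sets up the philosophers but stops before the forks. As a compressed, runnable sketch of the deadlock fix the chapter works toward — shared `Mutex` forks plus one "left-handed" philosopher to break the lock-ordering cycle — the following is illustrative only; the fork indices and the `run` helper are my additions, not part of the chapter text:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

struct Philosopher {
    name: String,
    left: usize,
    right: usize,
}

impl Philosopher {
    fn new(name: &str, left: usize, right: usize) -> Philosopher {
        Philosopher { name: name.to_string(), left, right }
    }

    fn eat(&self, forks: &[Mutex<()>]) {
        // Locking the lower-numbered fork first gives every thread the
        // same acquisition order, which rules out the circular-wait
        // sequence described in the passage.
        let _left = forks[self.left].lock().unwrap();
        let _right = forks[self.right].lock().unwrap();
        println!("{} is done eating.", self.name);
    }
}

pub fn run() -> usize {
    let forks: Arc<Vec<Mutex<()>>> =
        Arc::new((0..5).map(|_| Mutex::new(())).collect());
    let philosophers = vec![
        Philosopher::new("Judith Butler", 0, 1),
        Philosopher::new("Gilles Deleuze", 1, 2),
        Philosopher::new("Karl Marx", 2, 3),
        Philosopher::new("Emma Goldman", 3, 4),
        // Left-handed: takes fork 0 before fork 4, breaking the cycle.
        Philosopher::new("Michel Foucault", 0, 4),
    ];
    let handles: Vec<_> = philosophers
        .into_iter()
        .map(|p| {
            let forks = Arc::clone(&forks);
            thread::spawn(move || p.eat(&forks))
        })
        .collect();
    // Join all five threads; returns how many finished.
    handles.into_iter().map(|h| h.join().unwrap()).count()
}

fn main() {
    assert_eq!(run(), 5);
}
```

The key design point is the consistent acquisition order: every philosopher locks their lower-numbered fork first, so no cycle of waiters can form.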
{"query_id": "q-en-rust-928cd822b2040ff1cc4c93f9d2fb8aa51ca8572bc50f30533405cf5c07731993", "query": "The presence in a Rust source file of unusual but useful kinds of whitespace, such as ASCII 0x0C (form feed), leads to the following error: I have a specific use case for form-feeds in source files. But I think in general it is nice to ignore the same whitespace that every other programming language and file format ignores; it lessens confusion for people coming from other languages and backgrounds. My specific use case is the long-standing, but somewhat uncommon use of the form-feed character (which semantically is a separator between pages of text) as a way to group together especially closely related functions or blocks in a file of source code. Text editors or IDEs such as vim, Emacs or XCode provide convenience features to display these form-feeds in aesthetically pleasing way, move between form-feed-delimited pages, and restrict editing to one form-feed-delimited page at a time. It's just a simple convenience feature, but it would really be nice to support it.\n+1, I also use this feature and was disappointed when I found Rust didn't treat it as whitespace.\nLooks like a fairly simple change could be made to the lexer so it uses instead of limiting to . The only think I can think of is that the function in has been around since before we had a better function and nobody has changed it since then.\n/cc , do we want to accept all kinds of whitespace?\nI believe we should, yes.", "positive_passages": [{"docid": "doc-en-rust-b61ee9be3fa762c7b8fd2212493de15737331d7996e4ec0bd013c7cdff59669a", "text": "% Guessing Game For our first project, we\u2019ll implement a classic beginner programming problem: the guessing game. Here\u2019s how it works: Our program will generate a random integer between one and a hundred. It will then prompt us to enter a guess. Upon entering our guess, it will tell us if we\u2019re too low or too high. 
Once we guess correctly, it will congratulate us. Sounds good? Let\u2019s learn some Rust! For our first project, we\u2019ll implement a classic beginner programming problem: the guessing game. Here\u2019s how it works: Our program will generate a random integer between one and a hundred. It will then prompt us to enter a guess. Upon entering our guess, it will tell us if we\u2019re too low or too high. Once we guess correctly, it will congratulate us. Sounds good? Along the way, we\u2019ll learn a little bit about Rust. The next section, \u2018Syntax and Semantics\u2019, will dive deeper into each part. # Set up", "commid": "rust_pr_30595"}], "negative_passages": []}
{"query_id": "q-en-rust-928cd822b2040ff1cc4c93f9d2fb8aa51ca8572bc50f30533405cf5c07731993", "query": "The presence in a Rust source file of unusual but useful kinds of whitespace, such as ASCII 0x0C (form feed), leads to the following error: I have a specific use case for form-feeds in source files. But I think in general it is nice to ignore the same whitespace that every other programming language and file format ignores; it lessens confusion for people coming from other languages and backgrounds. My specific use case is the long-standing, but somewhat uncommon use of the form-feed character (which semantically is a separator between pages of text) as a way to group together especially closely related functions or blocks in a file of source code. Text editors or IDEs such as vim, Emacs or XCode provide convenience features to display these form-feeds in aesthetically pleasing way, move between form-feed-delimited pages, and restrict editing to one form-feed-delimited page at a time. It's just a simple convenience feature, but it would really be nice to support it.\n+1, I also use this feature and was disappointed when I found Rust didn't treat it as whitespace.\nLooks like a fairly simple change could be made to the lexer so it uses instead of limiting to . The only think I can think of is that the function in has been around since before we had a better function and nobody has changed it since then.\n/cc , do we want to accept all kinds of whitespace?\nI believe we should, yes.", "positive_passages": [{"docid": "doc-en-rust-ee9291f62ecc808e147f53e7bf560f443fbc7603a7c3fdef646df7cecd834766", "text": " % Rust Inside Other Languages For our third project, we\u2019re going to choose something that shows off one of Rust\u2019s greatest strengths: a lack of a substantial runtime. As organizations grow, they increasingly rely on a multitude of programming languages. 
Different programming languages have different strengths and weaknesses, and a polyglot stack lets you use a particular language where its strengths make sense and a different one where it\u2019s weak. A very common area where many programming languages are weak is in runtime performance of programs. Often, using a language that is slower, but offers greater programmer productivity, is a worthwhile trade-off. To help mitigate this, they provide a way to write some of your system in C and then call that C code as though it were written in the higher-level language. This is called a \u2018foreign function interface\u2019, often shortened to \u2018FFI\u2019. Rust has support for FFI in both directions: it can call into C code easily, but crucially, it can also be called _into_ as easily as C. Combined with Rust\u2019s lack of a garbage collector and low runtime requirements, this makes Rust a great candidate to embed inside of other languages when you need that extra oomph. There is a whole [chapter devoted to FFI][ffi] and its specifics elsewhere in the book, but in this chapter, we\u2019ll examine this particular use-case of FFI, with examples in Ruby, Python, and JavaScript. [ffi]: ffi.html # The problem There are many different projects we could choose here, but we\u2019re going to pick an example where Rust has a clear advantage over many other languages: numeric computing and threading. Many languages, for the sake of consistency, place numbers on the heap, rather than on the stack. Especially in languages that focus on object-oriented programming and use garbage collection, heap allocation is the default. Sometimes optimizations can stack allocate particular numbers, but rather than relying on an optimizer to do its job, we may want to ensure that we\u2019re always using primitive number types rather than some sort of object type. Second, many languages have a \u2018global interpreter lock\u2019 (GIL), which limits concurrency in many situations. 
This is done in the name of safety, which is a positive effect, but it limits the amount of work that can be done at the same time, which is a big negative. To emphasize these two aspects, we\u2019re going to create a little project that uses these two aspects heavily. Since the focus of the example is to embed Rust into other languages, rather than the problem itself, we\u2019ll just use a toy example: > Start ten threads. Inside each thread, count from one to five million. After > all ten threads are finished, print out \u2018done!\u2019. I chose five million based on my particular computer. Here\u2019s an example of this code in Ruby: ```ruby threads = [] 10.times do threads << Thread.new do count = 0 5_000_000.times do count += 1 end count end end threads.each do |t| puts \"Thread finished with count=#{t.value}\" end puts \"done!\" ``` Try running this example, and choose a number that runs for a few seconds. Depending on your computer\u2019s hardware, you may have to increase or decrease the number. On my system, running this program takes `2.156` seconds. And, if I use some sort of process monitoring tool, like `top`, I can see that it only uses one core on my machine. That\u2019s the GIL kicking in. While it\u2019s true that this is a synthetic program, one can imagine many problems that are similar to this in the real world. For our purposes, spinning up a few busy threads represents some sort of parallel, expensive computation. # A Rust library Let\u2019s rewrite this problem in Rust. 
First, let\u2019s make a new project with Cargo: ```bash $ cargo new embed $ cd embed ``` This program is fairly easy to write in Rust: ```rust use std::thread; fn process() { let handles: Vec<_> = (0..10).map(|_| { thread::spawn(|| { let mut x = 0; for _ in 0..5_000_000 { x += 1 } x }) }).collect(); for h in handles { println!(\"Thread finished with count={}\", h.join().map_err(|_| \"Could not join a thread!\").unwrap()); } } ``` Some of this should look familiar from previous examples. We spin up ten threads, collecting them into a `handles` vector. Inside of each thread, we loop five million times, and add one to `x` each time. Finally, we join on each thread. Right now, however, this is a Rust library, and it doesn\u2019t expose anything that\u2019s callable from C. If we tried to hook this up to another language right now, it wouldn\u2019t work. We only need to make two small changes to fix this, though. The first is to modify the beginning of our code: ```rust,ignore #[no_mangle] pub extern fn process() { ``` We have to add a new attribute, `no_mangle`. When you create a Rust library, it changes the name of the function in the compiled output. The reasons for this are outside the scope of this tutorial, but in order for other languages to know how to call the function, we can\u2019t do that. This attribute turns that behavior off. The other change is the `pub extern`. The `pub` means that this function should be callable from outside of this module, and the `extern` says that it should be able to be called from C. That\u2019s it! Not a whole lot of change. The second thing we need to do is to change a setting in our `Cargo.toml`. Add this at the bottom: ```toml [lib] name = \"embed\" crate-type = [\"dylib\"] ``` This tells Rust that we want to compile our library into a standard dynamic library. By default, Rust compiles an \u2018rlib\u2019, a Rust-specific format. 
Let\u2019s build the project now: ```bash $ cargo build --release Compiling embed v0.1.0 (file:///home/steve/src/embed) ``` We\u2019ve chosen `cargo build --release`, which builds with optimizations on. We want this to be as fast as possible! You can find the output of the library in `target/release`: ```bash $ ls target/release/ build deps examples libembed.so native ``` That `libembed.so` is our \u2018shared object\u2019 library. We can use this file just like any shared object library written in C! As an aside, this may be `embed.dll` (Microsoft Windows) or `libembed.dylib` (Mac OS X), depending on your operating system. Now that we\u2019ve got our Rust library built, let\u2019s use it from our Ruby. # Ruby Open up an `embed.rb` file inside of our project, and do this: ```ruby require 'ffi' module Hello extend FFI::Library ffi_lib 'target/release/libembed.so' attach_function :process, [], :void end Hello.process puts 'done!' ``` Before we can run this, we need to install the `ffi` gem: ```bash $ gem install ffi # this may need sudo Fetching: ffi-1.9.8.gem (100%) Building native extensions. This could take a while... Successfully installed ffi-1.9.8 Parsing documentation for ffi-1.9.8 Installing ri documentation for ffi-1.9.8 Done installing documentation for ffi after 0 seconds 1 gem installed ``` And finally, we can try running it: ```bash $ ruby embed.rb Thread finished with count=5000000 Thread finished with count=5000000 Thread finished with count=5000000 Thread finished with count=5000000 Thread finished with count=5000000 Thread finished with count=5000000 Thread finished with count=5000000 Thread finished with count=5000000 Thread finished with count=5000000 Thread finished with count=5000000 done! done! $ ``` Whoa, that was fast! On my system, this took `0.086` seconds, rather than the two seconds the pure Ruby version took. Let\u2019s break down this Ruby code: ```ruby require 'ffi' ``` We first need to require the `ffi` gem. 
This lets us interface with our Rust library like a C library. ```ruby module Hello extend FFI::Library ffi_lib 'target/release/libembed.so' ``` The `Hello` module is used to attach the native functions from the shared library. Inside, we `extend` the necessary `FFI::Library` module and then call `ffi_lib` to load up our shared object library. We just pass it the path that our library is stored, which, as we saw before, is `target/release/libembed.so`. ```ruby attach_function :process, [], :void ``` The `attach_function` method is provided by the FFI gem. It\u2019s what connects our `process()` function in Rust to a Ruby function of the same name. Since `process()` takes no arguments, the second parameter is an empty array, and since it returns nothing, we pass `:void` as the final argument. ```ruby Hello.process ``` This is the actual call into Rust. The combination of our `module` and the call to `attach_function` sets this all up. It looks like a Ruby function but is actually Rust! ```ruby puts 'done!' ``` Finally, as per our project\u2019s requirements, we print out `done!`. That\u2019s it! As we\u2019ve seen, bridging between the two languages is really easy, and buys us a lot of performance. Next, let\u2019s try Python! # Python Create an `embed.py` file in this directory, and put this in it: ```python from ctypes import cdll lib = cdll.LoadLibrary(\"target/release/libembed.so\") lib.process() print(\"done!\") ``` Even easier! We use `cdll` from the `ctypes` module. A quick call to `LoadLibrary` later, and we can call `process()`. On my system, this takes `0.017` seconds. Speedy! # Node.js Node isn\u2019t a language, but it\u2019s currently the dominant implementation of server-side JavaScript. 
In order to do FFI with Node, we first need to install the library: ```bash $ npm install ffi ``` After that installs, we can use it: ```javascript var ffi = require('ffi'); var lib = ffi.Library('target/release/libembed', { 'process': ['void', []] }); lib.process(); console.log(\"done!\"); ``` It looks more like the Ruby example than the Python example. We use the `ffi` module to get access to `ffi.Library()`, which loads up our shared object. We need to annotate the return type and argument types of the function, which are `void` for return and an empty array to signify no arguments. From there, we just call it and print the result. On my system, this takes a quick `0.092` seconds. # Conclusion As you can see, the basics of doing this are _very_ easy. Of course, there's a lot more that we could do here. Check out the [FFI][ffi] chapter for more details. ", "commid": "rust_pr_30595"}], "negative_passages": []}
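The chapter's `process()` above spins up threads but only prints. Scaled down here so it finishes instantly — the lowered loop bound and the summed return value are my additions for checkability, not from the chapter:

```rust
use std::thread;

// Same shape as the chapter's `process()`: spawn N threads, each
// counting up in a local variable, then join and total the results.
pub fn process_counts(threads: usize, per_thread: u64) -> u64 {
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            thread::spawn(move || {
                let mut x = 0u64;
                for _ in 0..per_thread {
                    x += 1;
                }
                x
            })
        })
        .collect();

    handles
        .into_iter()
        .map(|h| h.join().expect("Could not join a thread!"))
        .sum()
}

fn main() {
    assert_eq!(process_counts(10, 1_000), 10_000);
    println!("done!");
}
```

Because each `x` is a plain stack-local `u64` and the threads share nothing, there is no lock to contend on — which is the property the chapter contrasts against GIL-bound runtimes.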
{"query_id": "q-en-rust-b42b6ef93bee4d7180bb7d77309475fb36bc4cc78a287c17c9828feb5e3fa4cb", "query": "On currently nightly, I get: () Yet the description for this error was merged weeks ago:\ncc , in the future when adding new diagnostics mods be sure to add a call", "positive_passages": [{"docid": "doc-en-rust-f4d6ec07215a1a32663b89912b6987b6ce425def3d306872ad55a6423dedac7f", "text": "``` ptr::read(&v as *const _ as *const SomeType) // `v` transmuted to `SomeType` ``` Note that this does not move `v` (unlike `transmute`), and may need a call to `mem::forget(v)` in case you want to avoid destructors being called. \"##, E0152: r##\"", "commid": "rust_pr_29980"}], "negative_passages": []}
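The E0152 explanation quoted in the record above ends with the `ptr::read(&v as *const _ as *const SomeType)` idiom for reinterpreting a value without moving it. A minimal sketch of that pattern — the `bytes_of` name is illustrative, and the safety comment states the assumptions the cast relies on:

```rust
use std::ptr;

// Reinterpret the bytes of a value in place, per the idiom in the
// passage above. Safety: u32 and [u8; 4] have the same size and
// alignment requirements compatible with this read, and every bit
// pattern is a valid [u8; 4].
pub fn bytes_of(v: &u32) -> [u8; 4] {
    unsafe { ptr::read(v as *const u32 as *const [u8; 4]) }
}

fn main() {
    let v: u32 = 0x0102_0304;
    let b = bytes_of(&v);
    // Byte order depends on the platform, so we check the round-trip
    // rather than a fixed byte sequence.
    assert_eq!(u32::from_ne_bytes(b), v);
}
```

Unlike `transmute`, this reads through a pointer and leaves `v` in place, which is why the passage notes you may need `mem::forget` when the source type has a destructor (not the case for `u32`).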
{"query_id": "q-en-rust-e2a0c87d80579f0269081877ec7f7a2629dabf28513179d76c0e1d76683fc4cd", "query": "Sorting Ipv4Addrs results in unexpected ordering. Ordering is based on the internal representation of the Ipv4Addr without regard to \"network order\". I tried this code on little-endian architecture: I expected to see this happen: Instead, this happened: Rust playground demonstration:\nIpv6Addr has a similar issue: outputs:", "positive_passages": [{"docid": "doc-en-rust-6636f990a39255ba74688b8fb175988ea52afdbab1647bcbf476ad7d73749fdf", "text": "#[stable(feature = \"rust1\", since = \"1.0.0\")] impl Ord for Ipv4Addr { fn cmp(&self, other: &Ipv4Addr) -> Ordering { self.inner.s_addr.cmp(&other.inner.s_addr) self.octets().cmp(&other.octets()) } }", "commid": "rust_pr_29724"}], "negative_passages": []}
{"query_id": "q-en-rust-e2a0c87d80579f0269081877ec7f7a2629dabf28513179d76c0e1d76683fc4cd", "query": "Sorting Ipv4Addrs results in unexpected ordering. Ordering is based on the internal representation of the Ipv4Addr without regard to \"network order\". I tried this code on little-endian architecture: I expected to see this happen: Instead, this happened: Rust playground demonstration:\nIpv6Addr has a similar issue: outputs:", "positive_passages": [{"docid": "doc-en-rust-ae57fb92b8300ccdde4af82f61abb84f8d987019d221aa0f9515fbe51c9272bf", "text": "#[stable(feature = \"rust1\", since = \"1.0.0\")] impl Ord for Ipv6Addr { fn cmp(&self, other: &Ipv6Addr) -> Ordering { self.inner.s6_addr.cmp(&other.inner.s6_addr) self.segments().cmp(&other.segments()) } }", "commid": "rust_pr_29724"}], "negative_passages": []}
{"query_id": "q-en-rust-e2a0c87d80579f0269081877ec7f7a2629dabf28513179d76c0e1d76683fc4cd", "query": "Sorting Ipv4Addrs results in unexpected ordering. Ordering is based on the internal representation of the Ipv4Addr without regard to \"network order\". I tried this code on little-endian architecture: I expected to see this happen: Instead, this happened: Rust playground demonstration:\nIpv6Addr has a similar issue: outputs:", "positive_passages": [{"docid": "doc-en-rust-21c02b5cc2cd4d8c1ba6360d77af994dac635dd907712c91e935a84f09769917", "text": "let a = Ipv4Addr::new(127, 0, 0, 1); assert_eq!(Ipv4Addr::from(2130706433), a); } #[test] fn ord() { assert!(Ipv4Addr::new(100, 64, 3, 3) < Ipv4Addr::new(192, 0, 2, 2)); assert!(\"2001:db8:f00::1002\".parse:: { Var = { let x: S = 0; //~ ERROR: mismatched types 0 }, } fn main() {} ", "commid": "rust_pr_71952"}], "negative_passages": []}
{"query_id": "q-en-rust-c06e64e4379da7896ed0629280583c22a3a67c292ea07c43958847cb5d464ea3", "query": "Given this Rust code: A call to generates IR like this on x86_64 Linux: So we have a 12 byte alloca from which we then read 16 bytes.\nIsn't this subject to alignment by LLVM?\nwhat do you mean? The alloca is only 12 bytes large. That's also the amount of bytes we write in the , but then we ask to read data for a type that is 16 bytes large. I don't see how alignment is relevant here.\nhere's a demo: Build without optimizations on x86_64 Linux, and you should be able to observe in the C code, although it hasn't been passed as an argument. The mismatch in the struct type definitions is deliberate to show the problem.\nReading past the end of an alloca has well-defined behavior in LLVM IR. I guess we might end up with false positives if we ever support msan.\nare you implying that the generated IR is fine? If so, what is the defined behaviour / where can I read up on that? reads as if it is undefined. Without optimizations, this gives random output: With optimizations, SROA ignores the out-of-bounds read and emits a warning in debug mode, which doesn't seem like it is well-defined behaviour to me either.\nBasically, if a pointer points to valid memory, and is sufficiently aligned, you can load bytes up to the alignment... LLVM itself does this at etc. That said, looking a bit more closely, the given testcase performs an \"align 8\" load from an \"align 4\" pointer, which is more problematic.\nThanks! That looks like it just assumes that the load won't trap. Part of my concern here is that we might expose additional data this way, like observing in that C code example. Though I guess optimizations might eventually leave the upper 32bits undefined and expose data anyway, so this might be moot. The alignment problem is probably due to bad usage of . 
Ultimately, we should load and pass the struct elements individually anyway.\nThis has been fixed, we now only load 12 bytes: instead of . Marking as E-needstest.", "positive_passages": [{"docid": "doc-en-rust-c63951d94861c953072be302539c9e4563b619eb81d8e12ec352648e834e6a4f", "text": " error[E0308]: mismatched types --> $DIR/issue-67945-1.rs:3:20 | LL | enum Bug { | - this type parameter LL | Var = { LL | let x: S = 0; | - ^ expected type parameter `S`, found integer | | | expected due to this | = note: expected type parameter `S` found type `{integer}` error: aborting due to previous error For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_71952"}], "negative_passages": []}
{"query_id": "q-en-rust-c06e64e4379da7896ed0629280583c22a3a67c292ea07c43958847cb5d464ea3", "query": "Given this Rust code: A call to generates IR like this on x86_64 Linux: So we have a 12 byte alloca from which we then read 16 bytes.\nIsn't this subject to alignment by LLVM?\nwhat do you mean? The alloca is only 12 bytes large. That's also the amount of bytes we write in the , but then we ask to read data for a type that is 16 bytes large. I don't see how alignment is relevant here.\nhere's a demo: Build without optimizations on x86_64 Linux, and you should be able to observe in the C code, although it hasn't been passed as an argument. The mismatch in the struct type definitions is deliberate to show the problem.\nReading past the end of an alloca has well-defined behavior in LLVM IR. I guess we might end up with false positives if we ever support msan.\nare you implying that the generated IR is fine? If so, what is the defined behaviour / where can I read up on that? reads as if it is undefined. Without optimizations, this gives random output: With optimizations, SROA ignores the out-of-bounds read and emits a warning in debug mode, which doesn't seem like it is well-defined behaviour to me either.\nBasically, if a pointer points to valid memory, and is sufficiently aligned, you can load bytes up to the alignment... LLVM itself does this at etc. That said, looking a bit more closely, the given testcase performs an \"align 8\" load from an \"align 4\" pointer, which is more problematic.\nThanks! That looks like it just assumes that the load won't trap. Part of my concern here is that we might expose additional data this way, like observing in that C code example. Though I guess optimizations might eventually leave the upper 32bits undefined and expose data anyway, so this might be moot. The alignment problem is probably due to bad usage of . 
Ultimately, we should load and pass the struct elements individually anyway.\nThis has been fixed, we now only load 12 bytes: instead of . Marking as E-needstest.", "positive_passages": [{"docid": "doc-en-rust-f652d675ec8f5fcfd73aa32554f64e24f3a2e41e5201ba93acb1cdb2977965e1", "text": " #![feature(type_ascription)] enum Bug { Var = 0: S, //~^ ERROR: mismatched types //~| ERROR: mismatched types } fn main() {} ", "commid": "rust_pr_71952"}], "negative_passages": []}
{"query_id": "q-en-rust-c06e64e4379da7896ed0629280583c22a3a67c292ea07c43958847cb5d464ea3", "query": "Given this Rust code: A call to generates IR like this on x86_64 Linux: So we have a 12 byte alloca from which we then read 16 bytes.\nIsn't this subject to alignment by LLVM?\nwhat do you mean? The alloca is only 12 bytes large. That's also the amount of bytes we write in the , but then we ask to read data for a type that is 16 bytes large. I don't see how alignment is relevant here.\nhere's a demo: Build without optimizations on x86_64 Linux, and you should be able to observe in the C code, although it hasn't been passed as an argument. The mismatch in the struct type definitions is deliberate to show the problem.\nReading past the end of an alloca has well-defined behavior in LLVM IR. I guess we might end up with false positives if we ever support msan.\nare you implying that the generated IR is fine? If so, what is the defined behaviour / where can I read up on that? reads as if it is undefined. Without optimizations, this gives random output: With optimizations, SROA ignores the out-of-bounds read and emits a warning in debug mode, which doesn't seem like it is well-defined behaviour to me either.\nBasically, if a pointer points to valid memory, and is sufficiently aligned, you can load bytes up to the alignment... LLVM itself does this at etc. That said, looking a bit more closely, the given testcase performs an \"align 8\" load from an \"align 4\" pointer, which is more problematic.\nThanks! That looks like it just assumes that the load won't trap. Part of my concern here is that we might expose additional data this way, like observing in that C code example. Though I guess optimizations might eventually leave the upper 32bits undefined and expose data anyway, so this might be moot. The alignment problem is probably due to bad usage of . 
Ultimately, we should load and pass the struct elements individually anyway.\nThis has been fixed, we now only load 12 bytes: instead of . Marking as E-needstest.", "positive_passages": [{"docid": "doc-en-rust-2c693447b59b202cb8bcd7918ac10104329e775c2027f37598f5cd44b5935973", "text": " error[E0308]: mismatched types --> $DIR/issue-67945-2.rs:4:11 | LL | enum Bug { | - this type parameter LL | Var = 0: S, | ^ expected type parameter `S`, found integer | = note: expected type parameter `S` found type `{integer}` error[E0308]: mismatched types --> $DIR/issue-67945-2.rs:4:11 | LL | enum Bug", "commid": "rust_pr_71952"}], "negative_passages": []}
{"query_id": "q-en-rust-c06e64e4379da7896ed0629280583c22a3a67c292ea07c43958847cb5d464ea3", "query": "Given this Rust code: A call to generates IR like this on x86_64 Linux: So we have a 12 byte alloca from which we then read 16 bytes.\nIsn't this subject to alignment by LLVM?\nwhat do you mean? The alloca is only 12 bytes large. That's also the amount of bytes we write in the , but then we ask to read data for a type that is 16 bytes large. I don't see how alignment is relevant here.\nhere's a demo: Build without optimizations on x86_64 Linux, and you should be able to observe in the C code, although it hasn't been passed as an argument. The mismatch in the struct type definitions is deliberate to show the problem.\nReading past the end of an alloca has well-defined behavior in LLVM IR. I guess we might end up with false positives if we ever support msan.\nare you implying that the generated IR is fine? If so, what is the defined behaviour / where can I read up on that? reads as if it is undefined. Without optimizations, this gives random output: With optimizations, SROA ignores the out-of-bounds read and emits a warning in debug mode, which doesn't seem like it is well-defined behaviour to me either.\nBasically, if a pointer points to valid memory, and is sufficiently aligned, you can load bytes up to the alignment... LLVM itself does this at etc. That said, looking a bit more closely, the given testcase performs an \"align 8\" load from an \"align 4\" pointer, which is more problematic.\nThanks! That looks like it just assumes that the load won't trap. Part of my concern here is that we might expose additional data this way, like observing in that C code example. Though I guess optimizations might eventually leave the upper 32bits undefined and expose data anyway, so this might be moot. The alignment problem is probably due to bad usage of . 
Ultimately, we should load and pass the struct elements individually anyway.\nThis has been fixed, we now only load 12 bytes: instead of . Marking as E-needstest.", "positive_passages": [{"docid": "doc-en-rust-bb986ae4763f2ed124451eb7b166e2d5f84f7a021234a61a9b2038d442dab406", "text": " trait Foo {} impl<'a, T> Foo for &'a T {} struct Ctx<'a>(&'a ()) where &'a (): Foo, //~ ERROR: type annotations needed &'static (): Foo; fn main() {} ", "commid": "rust_pr_71952"}], "negative_passages": []}
{"query_id": "q-en-rust-c06e64e4379da7896ed0629280583c22a3a67c292ea07c43958847cb5d464ea3", "query": "Given this Rust code: A call to generates IR like this on x86_64 Linux: So we have a 12 byte alloca from which we then read 16 bytes.\nIsn't this subject to alignment by LLVM?\nwhat do you mean? The alloca is only 12 bytes large. That's also the amount of bytes we write in the , but then we ask to read data for a type that is 16 bytes large. I don't see how alignment is relevant here.\nhere's a demo: Build without optimizations on x86_64 Linux, and you should be able to observe in the C code, although it hasn't been passed as an argument. The mismatch in the struct type definitions is deliberate to show the problem.\nReading past the end of an alloca has well-defined behavior in LLVM IR. I guess we might end up with false positives if we ever support msan.\nare you implying that the generated IR is fine? If so, what is the defined behaviour / where can I read up on that? reads as if it is undefined. Without optimizations, this gives random output: With optimizations, SROA ignores the out-of-bounds read and emits a warning in debug mode, which doesn't seem like it is well-defined behaviour to me either.\nBasically, if a pointer points to valid memory, and is sufficiently aligned, you can load bytes up to the alignment... LLVM itself does this at etc. That said, looking a bit more closely, the given testcase performs an \"align 8\" load from an \"align 4\" pointer, which is more problematic.\nThanks! That looks like it just assumes that the load won't trap. Part of my concern here is that we might expose additional data this way, like observing in that C code example. Though I guess optimizations might eventually leave the upper 32bits undefined and expose data anyway, so this might be moot. The alignment problem is probably due to bad usage of . 
Ultimately, we should load and pass the struct elements individually anyway.\nThis has been fixed, we now only load 12 bytes: instead of . Marking as E-needstest.", "positive_passages": [{"docid": "doc-en-rust-ce30cd927de832b35372c5bdbb7413ecfee46e8e5e0e7e00fe7f022990f3cfb4", "text": " error[E0283]: type annotations needed --> $DIR/issue-34979.rs:6:13 | LL | trait Foo {} | --------- required by this bound in `Foo` ... LL | &'a (): Foo, | ^^^ cannot infer type for reference `&'a ()` | = note: cannot satisfy `&'a (): Foo` error: aborting due to previous error For more information about this error, try `rustc --explain E0283`. ", "commid": "rust_pr_71952"}], "negative_passages": []}
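The record above describes a 12-byte `#[repr(C)]` struct whose alloca was read back with a 16-byte load. A minimal sketch of the layout in question (the struct and field names are assumptions, not taken from the report):

```rust
// A #[repr(C)] struct of three i32 fields occupies exactly 12 bytes,
// so a 16-byte load of it would read 4 bytes past the end of its alloca —
// the situation the report's LLVM IR shows.
#[repr(C)]
#[derive(Clone, Copy, Debug, PartialEq)]
struct Triple {
    a: i32,
    b: i32,
    c: i32,
}

// Passing by value is where the oversized load appeared in the report.
fn take(t: Triple) -> i32 {
    t.a + t.b + t.c
}

fn main() {
    assert_eq!(std::mem::size_of::<Triple>(), 12);
    let t = Triple { a: 1, b: 2, c: 3 };
    assert_eq!(take(t), 6);
    println!("size = {}", std::mem::size_of::<Triple>());
}
```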
{"query_id": "q-en-rust-a289d76f0e4e79e26fef39dbf074bc52ac4314129c2350795d7488bc7ed8c20e", "query": "It is currently impossible to use a std::io::Cursor over a Vec in a smart pointer (Mutex, RefCell, ...) in a sane way. Please implement Read/Write/... over Cursor<&'a mut Vec { | - this type parameter LL | Var = 0: S, | ^^^^ expected `isize`, found type parameter `S` | = note: expected type `isize` found type parameter `S` error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0308`. fn write(&mut self, data: &[u8]) -> io::Result fn write(&mut self, buf: &[u8]) -> io::Result let pos: usize = self.position().try_into().map_err(|_| { Error::new(ErrorKind::InvalidInput, \"cursor position exceeds maximum possible vector length\") })?; // Make sure the internal buffer is as least as big as where we // currently are let len = self.inner.len(); if len < pos { // use `resize` so that the zero filling is as efficient as possible self.inner.resize(pos, 0); } // Figure out what bytes will be used to overwrite what's currently // there (left), and what will be appended on the end (right) { let space = self.inner.len() - pos; let (left, right) = buf.split_at(cmp::min(space, buf.len())); self.inner[pos..pos + left.len()].copy_from_slice(left); self.inner.extend_from_slice(right); } // Bump us forward self.set_position((pos + buf.len()) as u64); Ok(buf.len()) vec_write(&mut self.pos, &mut self.inner, buf) } fn flush(&mut self) -> io::Result<()> { Ok(()) } }", "commid": "rust_pr_46830"}], "negative_passages": []}
{"query_id": "q-en-rust-a289d76f0e4e79e26fef39dbf074bc52ac4314129c2350795d7488bc7ed8c20e", "query": "It is currently impossible to use a std::io::Cursor over a Vec in a smart pointer (Mutex, RefCell, ...) in a sane way. Please implement Read/Write/... over Cursor<&'a mut Vec let pos = cmp::min(self.pos, self.inner.len() as u64); let amt = (&mut self.inner[(pos as usize)..]).write(buf)?; self.pos += amt as u64; Ok(amt) slice_write(&mut self.pos, &mut self.inner, buf) } fn flush(&mut self) -> io::Result<()> { Ok(()) } }", "commid": "rust_pr_46830"}], "negative_passages": []}
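The feature requested in this record — `Write` for `Cursor<&mut Vec<u8>>` — was added by the referenced PR and is available on modern stable Rust. A minimal sketch of using it through a mutable borrow:

```rust
use std::io::{Cursor, Seek, SeekFrom, Write};

fn main() {
    let mut buf: Vec<u8> = vec![1, 2, 3];
    {
        // Cursor over a mutable borrow of the Vec: writes past the end
        // grow the vector, matching the `vec_write` behavior in the diff.
        let mut cur = Cursor::new(&mut buf);
        cur.seek(SeekFrom::Start(2)).unwrap();
        cur.write_all(&[9, 9, 9]).unwrap();
    }
    // The borrow has ended, so the Vec is usable again.
    assert_eq!(buf, vec![1, 2, 9, 9, 9]);
    println!("{:?}", buf);
}
```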
{"query_id": "q-en-rust-a289d76f0e4e79e26fef39dbf074bc52ac4314129c2350795d7488bc7ed8c20e", "query": "It is currently impossible to use a std::io::Cursor over a Vec in a smart pointer (Mutex, RefCell, ...) in a sane way. Please implement Read/Write/... over Cursor<&'a mut Vec automatically add a `main()` wrapper around your code, and in the right place. For example: automatically add a `main()` wrapper around your code, using heuristics to attempt to put it in the right place. For example: ```rust /// ```", "commid": "rust_pr_30153"}], "negative_passages": []}
{"query_id": "q-en-rust-81010324f12c35cfacd1e252e9c003ab5a26a7cb052a679d3eeca2e6142eda06", "query": "When a doctest like this is written: /// /// running the tests will produce an error like: since the test code is wrapped inside a for execution. This is imho not clear in this algorithm description\nThis seems like a bug somewhere in rustc/cargo, rather than a problem with the docs, because rust /// works fine, but (as was mentioned here) rust /// does not. This seems like wrong behaviour. Also, the suggested rust /// does not work either, I don't know if there is an RFC or issue open about attempting to resolve the names suggested under ... and only showing them if they exist. EDIT: as mentioned below, this is not a bug with rustc or cargo, it's just that there is no way to refer to the current function scope in a use statement if you import a crate into it.\nit's not wrong but it is very misleading. The key is that if does not appear in the doc test, then rustdoc encloses the entire test in a main function. This, coupled with the fact that does not really work inside a function, produces the bad situation in which we find ourselves.\nRight, it's the 'entire' part that's confusing. I myself find it to be so and I wrote the sentence!\nIt would be nice if rustdoc used similar heuristics to playbot, where stuff like extern crates and crate attributes get hoisted out of the generated main function.\nWhy is extern crate even allowed inside of functions? That seems like a horrible misfeature.\nIt's similar to anything else that brings a name into scope, you can do it in whatever scope you'd like.\nI guess the \"real\" problem is that has no way to refer to a function/expression scope. doesn't work, making the \"did you mean\" note less than helpful.", "positive_passages": [{"docid": "doc-en-rust-d6839759505fb3c5f6f84fa7073980019ebf094e7de9fdb99a97595c28048b2e", "text": "`unused_attributes`, and `dead_code`. Small examples often trigger these lints. 3. 
If the example does not contain `extern crate`, then `extern crate `main()` as well. Finally, a judicious use of `#` to comment out those two things, so they don\u2019t show up in the output. `main()` as well (for reasons discussed above). Finally, a judicious use of `#` to comment out those two things, so they don\u2019t show up in the output. Another case where the use of `#` is handy is when you want to ignore error handling. Lets say you want the following,", "commid": "rust_pr_30153"}], "negative_passages": []}
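The heuristics this record documents (wrapping a doc test in `main()`, hiding lines with `#`) can be seen in an ordinary doc comment. The snippet below is an illustration with an arbitrary function name; compiled as a normal program, the doc test itself is inert:

```rust
/// Adds one.
///
/// Rustdoc wraps this test body in `fn main()` because none is present,
/// and lines beginning with `# ` are compiled but hidden from the docs.
///
/// ```
/// # // hidden setup line, kept out of the rendered example
/// assert_eq!(2, 1 + 1);
/// ```
pub fn add_one(x: i32) -> i32 {
    x + 1
}

fn main() {
    assert_eq!(add_one(41), 42);
    println!("{}", add_one(41));
}
```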
{"query_id": "q-en-rust-7835a5cc299544380275eb1e9368af3a50a80176472ef27d66eb0c6872f787a1", "query": "As of , we have enabled the LiveIRVariables LLVM pass which is a part of ongoing GC work for . Unfortunately, something in test/bench/task-perf-word- makes the LiveIRVariables pass unhappy. It seems that some LLVM optimization pass is producing an irreducible control flow graph, and LLVM's LoopSimplify pass isn't sophisticated enough to undo the damage. As a temporary workaround, xfails the two tests which break the liveness pass. But we should figure out what LLVM pass is causing the trouble and try to resolve that and un-xfail those tests.\nFixed in , and workaround reverted in .", "positive_passages": [{"docid": "doc-en-rust-f204fc99f2aa6379497b74ec81e4ca3d611d662dd977591485e5369e5d2cd5b0", "text": "// the merge has confused the heck out of josh in the past. // We pass `--no-verify` to avoid running git hooks like `./miri fmt` that could in turn // trigger auto-actions. sh.write_file(\"rust-version\", &commit)?; sh.write_file(\"rust-version\", format!(\"{commit}n\"))?; const PREPARING_COMMIT_MESSAGE: &str = \"Preparing for merge from rustc\"; cmd!(sh, \"git commit rust-version --no-verify -m {PREPARING_COMMIT_MESSAGE}\") .run()", "commid": "rust_pr_114735"}], "negative_passages": []}
{"query_id": "q-en-rust-7835a5cc299544380275eb1e9368af3a50a80176472ef27d66eb0c6872f787a1", "query": "As of , we have enabled the LiveIRVariables LLVM pass which is a part of ongoing GC work for . Unfortunately, something in test/bench/task-perf-word- makes the LiveIRVariables pass unhappy. It seems that some LLVM optimization pass is producing an irreducible control flow graph, and LLVM's LoopSimplify pass isn't sophisticated enough to undo the damage. As a temporary workaround, xfails the two tests which break the liveness pass. But we should figure out what LLVM pass is causing the trouble and try to resolve that and un-xfail those tests.\nFixed in , and workaround reverted in .", "positive_passages": [{"docid": "doc-en-rust-13547713196d3df676561f599d894c1cf65ff098e8d9f9e78b076c44caa880dd", "text": "// interleaving, but wether UB happens can depend on whether a write occurs in the // future... let is_write = new_perm.initial_state.is_active() || (new_perm.initial_state.is_resrved() && new_perm.protector.is_some()); || (new_perm.initial_state.is_reserved() && new_perm.protector.is_some()); if is_write { // Need to get mutable access to alloc_extra. // (Cannot always do this as we can do read-only reborrowing on read-only allocations.)", "commid": "rust_pr_114735"}], "negative_passages": []}
{"query_id": "q-en-rust-7835a5cc299544380275eb1e9368af3a50a80176472ef27d66eb0c6872f787a1", "query": "As of , we have enabled the LiveIRVariables LLVM pass which is a part of ongoing GC work for . Unfortunately, something in test/bench/task-perf-word- makes the LiveIRVariables pass unhappy. It seems that some LLVM optimization pass is producing an irreducible control flow graph, and LLVM's LoopSimplify pass isn't sophisticated enough to undo the damage. As a temporary workaround, xfails the two tests which break the liveness pass. But we should figure out what LLVM pass is causing the trouble and try to resolve that and un-xfail those tests.\nFixed in , and workaround reverted in .", "positive_passages": [{"docid": "doc-en-rust-3ae5c75c7e70e6f94fb2a14a00738d579ed341579feed3e3595ce2fe234510bd", "text": "matches!(self.inner, Active) } pub fn is_resrved(self) -> bool { pub fn is_reserved(self) -> bool { matches!(self.inner, Reserved { .. }) }", "commid": "rust_pr_114735"}], "negative_passages": []}
{"query_id": "q-en-rust-7835a5cc299544380275eb1e9368af3a50a80176472ef27d66eb0c6872f787a1", "query": "As of , we have enabled the LiveIRVariables LLVM pass which is a part of ongoing GC work for . Unfortunately, something in test/bench/task-perf-word- makes the LiveIRVariables pass unhappy. It seems that some LLVM optimization pass is producing an irreducible control flow graph, and LLVM's LoopSimplify pass isn't sophisticated enough to undo the damage. As a temporary workaround, xfails the two tests which break the liveness pass. But we should figure out what LLVM pass is causing the trouble and try to resolve that and un-xfail those tests.\nFixed in , and workaround reverted in .", "positive_passages": [{"docid": "doc-en-rust-198fc16bb90497cfb332e19864f84c3ae04617279a7858caad47d20bc7526e5e", "text": "/// in an existing allocation, then returns Err containing the position /// where such allocation should be inserted fn find_offset(&self, offset: Size) -> Result // We do a binary search. let mut left = 0usize; // inclusive let mut right = self.v.len(); // exclusive loop { if left == right { // No element contains the given offset. But the // position is where such element should be placed at. return Err(left); } let candidate = left.checked_add(right).unwrap() / 2; let elem = &self.v[candidate]; self.v.binary_search_by(|elem| -> std::cmp::Ordering { if offset < elem.range.start { // We are too far right (offset is further left). debug_assert!(candidate < right); // we are making progress right = candidate; std::cmp::Ordering::Greater } else if offset >= elem.range.end() { // We are too far left (offset is further right). debug_assert!(candidate >= left); // we are making progress left = candidate + 1; std::cmp::Ordering::Less } else { // This is it! return Ok(candidate); std::cmp::Ordering::Equal } } }) } /// Determines whether a given access on `range` overlaps with", "commid": "rust_pr_114735"}], "negative_passages": []}
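The diff above replaces a hand-rolled binary search over non-overlapping ranges with `binary_search_by`. A standalone sketch of the same idea over `(start, end)` pairs (the tuple representation is an assumption; the real code uses an allocation-range type):

```rust
use std::cmp::Ordering;

// Find the index of the half-open range containing `offset`, or
// Err(insertion_index) if no range contains it — mirroring the
// Ok/Err contract of slice::binary_search_by.
fn find_offset(ranges: &[(usize, usize)], offset: usize) -> Result<usize, usize> {
    ranges.binary_search_by(|&(start, end)| {
        if offset < start {
            Ordering::Greater // this range lies right of the offset
        } else if offset >= end {
            Ordering::Less // this range lies left of the offset
        } else {
            Ordering::Equal // offset falls inside this range
        }
    })
}

fn main() {
    let v = [(0, 4), (8, 12), (20, 21)];
    assert_eq!(find_offset(&v, 9), Ok(1));
    assert_eq!(find_offset(&v, 5), Err(1)); // would be inserted at index 1
    println!("ok");
}
```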
{"query_id": "q-en-rust-edf1e264a5c853d2fa7a2c0afc3db045448734e8ef7cb1e8b1d201c8a3bc9174", "query": "As a newbie, I made the mistake that can be witnessed with the field in . The way this happened is that I vaguely remembered that you can't use when specifying the fields of and to make sure, I looked up the struct chapter in the book. Sure enough, it said \"Mutability is a property of the binding, not of the structure itself.\" Now, as a newbie, I thought this forbade in field definitions altogether even though it only forbids to the left of the colon and you can still have as the type to the right of the colon. It would be nice if this was briefly clarified for the benefit of newbies looking stuff up piecemeal and not properly thinking about stuff in the context of what has previously been said in an earlier chapter.\n:+1:", "positive_passages": [{"docid": "doc-en-rust-761f50ba59787d1f702bc334dab2bdb5bd092cc344eac4320b858e0b7738412b", "text": "} ``` Your structure can still contain `&mut` pointers, which will let you do some kinds of mutation: ```rust struct Point { x: i32, y: i32, } struct PointRef<'a> { x: &'a mut i32, y: &'a mut i32, } fn main() { let mut point = Point { x: 0, y: 0 }; { let r = PointRef { x: &mut point.x, y: &mut point.y }; *r.x = 5; *r.y = 6; } assert_eq!(5, point.x); assert_eq!(6, point.y); } ``` # Update syntax A `struct` can include `..` to indicate that you want to use a copy of some", "commid": "rust_pr_30699"}], "negative_passages": []}
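The documentation added in this record shows that although `mut` is not allowed on individual struct fields, mutation is still possible through fields of `&mut` type. The example from the diff compiles as-is:

```rust
struct Point {
    x: i32,
    y: i32,
}

// Fields cannot be declared `mut`, but they can hold `&mut` pointers,
// which allows mutation of the data they point to.
struct PointRef<'a> {
    x: &'a mut i32,
    y: &'a mut i32,
}

fn main() {
    let mut point = Point { x: 0, y: 0 };
    {
        let r = PointRef { x: &mut point.x, y: &mut point.y };
        *r.x = 5;
        *r.y = 6;
    }
    assert_eq!((point.x, point.y), (5, 6));
    println!("{} {}", point.x, point.y);
}
```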
{"query_id": "q-en-rust-bfbc512e1a388512ed369460691dd47244d7efe2ff4e317e7da309e9b2454163", "query": "This could've been written with written as and similarly for /. It was written this way only to make explicit the necessity of one blanket impl for satisfying the other. I'm guessing that extensional equality is not judged, even though it is possible to make such a judgment and such a judgment must be made to compile code that compiles today. on line 12 ought to be equated to (which must be detected at some other point in time else line 13 wouldn't be able to compile). EDIT: Playing around a bit, it looks more and more like it's all about the HKL bounds...\nHRTBs (I've been using the wrong nomenclature the whole time!) make projections sad. Very likely a dupe of .\ncc\nKnown issue.\n(or Still a bit unclear on y'all's role separation) What's the expected overall approach to fixing this issue? Even if you don't expect a newcomer to handle it, I'd like to know anyway. :-D\nactually, I'm not entirely sure. There are a serious of refactorings that I have in mind for the type system / trait resolver, and I've been figuring that after those are done, I would come back to revisit some of these issues, if they still occur, but it may be worth digging into this example (or ) in detail. Honestly it's not all fresh in my mind. The refactorings I am thinking of are, first, lazy normalization (as described ), which may indeed help here, though I've not thought it through very deeply. Second, I'd like to generally rework how the trait checker is working to make it easier to extend the environment as you go -- the current setup, where the set of assumptions etc is semi-fixed when the inference context is created -- makes it hard to \"descend through\" a binder like and then do trait resolutions and normalizations within that context.\nSo, potentially w.r.t. 
lazy normalization, I've been playing with another bit of code (trying to shrink it a bit): From reading the debug logs, it seems that when checking the well-formed-ness of , upon normalizing to , and registering and ing on the implied , the compiler has no idea that and never normalizes to it either. Is this a situation in which the compiler might have assumed that everything was already normalized the best it could be and thus gave up? An aside: does lazy normalization have something to do with ?\nIs this It looks similar, but uncertain.\nWhat's the state of this now? I fairly frequently run into problems with associated types involved with HRTBs not projecting/unifying.\nTriage: last two comments asking for clarification, no replies. Given comments earlier in the thread, I imagine this issue is \"post-chalk\".\nThis is one of the things that the traits working group is kind of actively working towards. It's blocked to some extent on the universes work, see e.g. which tries to unblock that effort, plus lazy norm. I think we'll be making active progress though over the next few months.\nTriage: According to , this should get tagged .\nI think this issue is related to an error I encountered when trying to use the trait example . The trait is defined as follow: I implemented it for a vector of booleans: I believe this should compile, but it errors. let parent_link = ModuleParentLink(parent, name); let def = Def::TyAlias(self.ast_map.local_def_id(item.id)); let module = self.new_module(parent_link, Some(def), false, is_public); self.define(parent, name, TypeNS, (module, sp)); self.define(parent, name, TypeNS, (def, sp, modifiers)); parent }", "commid": "rust_pr_32134"}], "negative_passages": []}
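The discussion in this record concerns higher-ranked trait bounds (HRTBs) whose associated-type projections fail to unify. For contrast, a minimal HRTB that does work on stable — a bound quantified over every lifetime `'a`, which is the shape of bound the thread's failing examples build on:

```rust
// `F` must accept a borrow of *any* lifetime, including one local to
// this function body — the essence of a for<'a> (HRTB) bound.
fn apply_to_local<F>(f: F) -> i32
where
    F: for<'a> Fn(&'a i32) -> i32,
{
    let x = 10;
    f(&x)
}

fn main() {
    assert_eq!(apply_to_local(|r| r + 1), 11);
    println!("{}", apply_to_local(|r| r + 1));
}
```

The failures in the thread arise when such a bound additionally constrains an associated type of the higher-ranked trait, which the pre-lazy-normalization solver could not project through.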
{"query_id": "q-en-rust-982d85e2418997eec91ec11587e1a75c5019587713ac915a830e36ac21b292b9", "query": "Code that compiles fine with rustc is causing an internal compiler error: unexpected panic with rustdoc. I tried this code (): The second line is essentially a typo, which can happen if used to be an Enum for instance. Again, the following works fine (although I'm not sure why, can you even add anything to the namespace?): However, if I run: I get the error: This actually happened on a version of a package already pushed to (hdf5-sys 0.3.0), so anything that depends on the erroneous version and runs will see this error. Run on . Backtrace:\nThis is easy to fix in rustdoc, but should this even compile?\nWhoa kinda crazy... cc (resolve weridness, longstanding though!)", "positive_passages": [{"docid": "doc-en-rust-3d6ba1bf6cc5d650c1eb634cf0785a6a7d882ec5f01bb87e2f7d382c17d0ab5a", "text": "} match def { Def::Mod(_) | Def::ForeignMod(_) | Def::Enum(..) | Def::TyAlias(..) => { Def::Mod(_) | Def::ForeignMod(_) | Def::Enum(..) => { debug!(\"(building reduced graph for external crate) building module {} {}\", final_ident, is_public);", "commid": "rust_pr_32134"}], "negative_passages": []}
{"query_id": "q-en-rust-982d85e2418997eec91ec11587e1a75c5019587713ac915a830e36ac21b292b9", "query": "Code that compiles fine with rustc is causing an internal compiler error: unexpected panic with rustdoc. I tried this code (): The second line is essentially a typo, which can happen if used to be an Enum for instance. Again, the following works fine (although I'm not sure why, can you even add anything to the namespace?): However, if I run: I get the error: This actually happened on a version of a package already pushed to (hdf5-sys 0.3.0), so anything that depends on the erroneous version and runs will see this error. Run on . Backtrace:\nThis is easy to fix in rustdoc, but should this even compile?\nWhoa kinda crazy... cc (resolve weridness, longstanding though!)", "positive_passages": [{"docid": "doc-en-rust-e259e0eaf5d12d162cc288e89f7512658c20e42fb8a8b698c0e131ca455e015c", "text": "let module = self.new_module(parent_link, Some(def), true, is_public); self.try_define(new_parent, name, TypeNS, (module, DUMMY_SP)); } Def::AssociatedTy(..) => { Def::TyAlias(..) | Def::AssociatedTy(..) => { debug!(\"(building reduced graph for external crate) building type {}\", final_ident); self.try_define(new_parent, name, TypeNS, (def, DUMMY_SP, modifiers));", "commid": "rust_pr_32134"}], "negative_passages": []}
{"query_id": "q-en-rust-982d85e2418997eec91ec11587e1a75c5019587713ac915a830e36ac21b292b9", "query": "Code that compiles fine with rustc is causing an internal compiler error: unexpected panic with rustdoc. I tried this code (): The second line is essentially a typo, which can happen if used to be an Enum for instance. Again, the following works fine (although I'm not sure why, can you even add anything to the namespace?): However, if I run: I get the error: This actually happened on a version of a package already pushed to (hdf5-sys 0.3.0), so anything that depends on the erroneous version and runs will see this error. Run on . Backtrace:\nThis is easy to fix in rustdoc, but should this even compile?\nWhoa kinda crazy... cc (resolve weridness, longstanding though!)", "positive_passages": [{"docid": "doc-en-rust-39f798ae9849d5de4bfd079de52bcfe276f623f3d5122345f2effe36c70c4e24", "text": "target_module: Module<'b>, directive: &'b ImportDirective) -> ResolveResult<()> { if let Some(Def::Trait(_)) = target_module.def { self.resolver.session.span_err(directive.span, \"items in traits are not importable.\"); } if module_.def_id() == target_module.def_id() { // This means we are trying to glob import a module into itself, and it is a no-go let msg = \"Cannot glob-import a module into itself.\".into();", "commid": "rust_pr_32134"}], "negative_passages": []}
{"query_id": "q-en-rust-982d85e2418997eec91ec11587e1a75c5019587713ac915a830e36ac21b292b9", "query": "Code that compiles fine with rustc is causing an internal compiler error: unexpected panic with rustdoc. I tried this code (): The second line is essentially a typo, which can happen if used to be an Enum for instance. Again, the following works fine (although I'm not sure why, can you even add anything to the namespace?): However, if I run: I get the error: This actually happened on a version of a package already pushed to (hdf5-sys 0.3.0), so anything that depends on the erroneous version and runs will see this error. Run on . Backtrace:\nThis is easy to fix in rustdoc, but should this even compile?\nWhoa kinda crazy... cc (resolve weridness, longstanding though!)", "positive_passages": [{"docid": "doc-en-rust-8df80afe3cb494eccf9554fbb990ed8da5e8d86187ae8cfe81e23738f4d698c4", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 Please verify you didn't misspell the import name or the import does exist in the module from where you tried to import it. Example: Paths in `use` statements are relative to the crate root. To import items relative to the current and parent modules, use the `self::` and `super::` prefixes, respectively. Also verify that you didn't misspell the import name and that the import exists in the module from where you tried to import it. Example: ```ignore use something::Foo; // ok! use self::something::Foo; // ok! mod something { pub struct Foo;", "commid": "rust_pr_33320"}], "negative_passages": []}
{"query_id": "q-en-rust-0b0f6730fce7dffa2d0be2400db454ca790389aee6a7c42d8bd0b6bbe896a6e1", "query": "Exemple code: Error is: The compiler should detect that \"self\" is missing. Note this is an error I do all the time as a beginner.\nEven without doing an analysis of what could have likely been intended this error message could mention that paths in are absolute by default, unless they start with or .\nRan into this a few days ago, would love to the messaging here improved :+1:\nThe linked PR adds the information to the long diagnostics, at least. Note that if you just have an then the error message is markedly different: I wonder if some consolidation would be in order.\nFixed in .", "positive_passages": [{"docid": "doc-en-rust-409daa07628c5eab5a3d998d58512d2835a8c3c4c010b272486f4204209365b6", "text": "``` Or, if you tried to use a module from an external crate, you may have missed the `extern crate` declaration: the `extern crate` declaration (which is usually placed in the crate root): ```ignore extern crate homura; // Required to use the `homura` crate", "commid": "rust_pr_33320"}], "negative_passages": []}
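The doc fix in this record is about `use` paths being crate-root-relative by default. A minimal compiling sketch of the `self::` form the diagnostics recommend:

```rust
mod something {
    pub struct Foo;
}

// Paths in `use` are resolved from the crate root by default;
// `self::` makes the path relative to the current module instead.
use self::something::Foo;

fn main() {
    let _f: Foo = Foo;
    println!("ok");
}
```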
{"query_id": "q-en-rust-cbb83b0c5c6c3ca5f35bdae7e5f1890b43d36a4e87d6df41d4a8b50fc1aab6db", "query": "rustdoc source links end in as can be seen here at the moment (the [src] link) Source link is: (Which is a dead link) Also reproduced locally using rustc 1.9.0-nightly ( 2016-03-07)", "positive_passages": [{"docid": "doc-en-rust-a90caf70b1a527aca7bdfec9be38c26d62c24fbfa85afcd7f016789d00437e50", "text": "// has anchors for the line numbers that we're linking to. } else if self.item.def_id.is_local() { self.cx.local_sources.get(&PathBuf::from(&self.item.source.filename)).map(|path| { format!(\"{root}src/{krate}/{path}.html#{href}\", format!(\"{root}src/{krate}/{path}#{href}\", root = self.cx.root_path, krate = self.cx.layout.krate, path = path,", "commid": "rust_pr_32117"}], "negative_passages": []}
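The one-token diff in this record drops a duplicated `.html` from rustdoc's source-link template, since `path` already carries the extension. A sketch of the fixed template (the helper name and sample values here are assumptions):

```rust
// After the fix: `path` already ends in ".html", so the format string
// no longer appends a second extension that produced dead links.
fn fixed_href(root: &str, krate: &str, path: &str, anchor: &str) -> String {
    format!("{}src/{}/{}#{}", root, krate, path, anchor)
}

fn main() {
    let url = fixed_href("/", "std", "libstd/lib.rs.html", "100-120");
    assert_eq!(url, "/src/std/libstd/lib.rs.html#100-120");
    println!("{}", url);
}
```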
{"query_id": "q-en-rust-adc16be03be73de6e0a1b3d713cfe38f6f55b14b6e38aa02975befa3d0479aaf", "query": "Unfortunately, I can't reproduce this at a smaller scale. This issue occurs with as of commit . Rust version is Currently the tests (run from the directory with ) pass. However, applying the following patch: causes the following compiler error: which is claiming that an import conflicts with itself. I have pushed up the branch with that change applied for easy reproduction. Again, I'm sorry that I cannot provide a report on a smaller scale, but this appears to be some edge case blowing up. I have no clue why adding a module with a few imports and nothing else would be causing this issue.\nBisecting to find the nightly which caused this\nI can confirm that this issue does not occur on . Unfortunately, there was a bug that prevented us from compiling on that wasn't fixed until , so I cannot provide anything more specific than that.\ncc\nThanks for the report! This bug was introduced in . I diagnosed the issue and will fix it ASAP.", "positive_passages": [{"docid": "doc-en-rust-0eb3a2ef6ae2ba2ceb051567d2e6d862d5fd173e3be2207f3db8d0c9ee7aebe0", "text": "} } fn increment_outstanding_references(&mut self, is_public: bool) { self.outstanding_references += 1; if is_public { self.pub_outstanding_references += 1; } } fn decrement_outstanding_references(&mut self, is_public: bool) { let decrement_references = |count: &mut _| { assert!(*count > 0); *count -= 1; }; decrement_references(&mut self.outstanding_references); if is_public { decrement_references(&mut self.pub_outstanding_references); } } fn report_conflicts let mut resolutions = self.resolutions.borrow_mut(); let resolution = resolutions.entry((name, ns)).or_insert_with(Default::default); resolution.outstanding_references += 1; if is_public { resolution.pub_outstanding_references += 1; } } fn decrement_outstanding_references_for(&self, name: Name, ns: Namespace, is_public: bool) { let decrement_references = |count: 
&mut _| { assert!(*count > 0); *count -= 1; }; self.update_resolution(name, ns, |resolution| { decrement_references(&mut resolution.outstanding_references); if is_public { decrement_references(&mut resolution.pub_outstanding_references); } }) self.resolutions.borrow_mut().entry((name, ns)).or_insert_with(Default::default) .increment_outstanding_references(is_public); } // Use `update` to mutate the resolution for the name.", "commid": "rust_pr_32227"}], "negative_passages": []}
{"query_id": "q-en-rust-adc16be03be73de6e0a1b3d713cfe38f6f55b14b6e38aa02975befa3d0479aaf", "query": "Unfortunately, I can't reproduce this at a smaller scale. This issue occurs with as of commit . Rust version is Currently the tests (run from the directory with ) pass. However, applying the following patch: causes the following compiler error: which is claiming that an import conflicts with itself. I have pushed up the branch with that change applied for easy reproduction. Again, I'm sorry that I cannot provide a report on a smaller scale, but this appears to be some edge case blowing up. I have no clue why adding a module with a few imports and nothing else would be causing this issue.\nBisecting to find the nightly which caused this\nI can confirm that this issue does not occur on . Unfortunately, there was a bug that prevented us from compiling on that wasn't fixed until , so I cannot provide anything more specific than that.\ncc\nThanks for the report! This bug was introduced in . I diagnosed the issue and will fix it ASAP.", "positive_passages": [{"docid": "doc-en-rust-efe07827c2a8a00221a654d844a57f0ec275463dd42e2a31555d5395223e30a1", "text": "// Temporarily count the directive as determined so that the resolution fails // (as opposed to being indeterminate) when it can only be defined by the directive. if !determined { module_.decrement_outstanding_references_for(target, ns, directive.is_public) module_.resolutions.borrow_mut().get_mut(&(target, ns)).unwrap() .decrement_outstanding_references(directive.is_public); } let result = self.resolver.resolve_name_in_module(target_module, source, ns, false, true);", "commid": "rust_pr_32227"}], "negative_passages": []}
{"query_id": "q-en-rust-adc16be03be73de6e0a1b3d713cfe38f6f55b14b6e38aa02975befa3d0479aaf", "query": "Unfortunately, I can't reproduce this at a smaller scale. This issue occurs with as of commit . Rust version is Currently the tests (run from the directory with ) pass. However, applying the following patch: causes the following compiler error: which is claiming that an import conflicts with itself. I have pushed up the branch with that change applied for easy reproduction. Again, I'm sorry that I cannot provide a report on a smaller scale, but this appears to be some edge case blowing up. I have no clue why adding a module with a few imports and nothing else would be causing this issue.\nBisecting to find the nightly which caused this\nI can confirm that this issue does not occur on . Unfortunately, there was a bug that prevented us from compiling on that wasn't fixed until , so I cannot provide anything more specific than that.\ncc\nThanks for the report! This bug was introduced in . I diagnosed the issue and will fix it ASAP.", "positive_passages": [{"docid": "doc-en-rust-91ab42eb7faf4f6e47a2ec66426ba38022c680b7c4740582c9f9b07797b46009", "text": "self.report_conflict(target, ns, &directive.import(binding, None), old_binding); } } module_.decrement_outstanding_references_for(target, ns, directive.is_public); module_.update_resolution(target, ns, |resolution| { resolution.decrement_outstanding_references(directive.is_public); }) } match (&value_result, &type_result) {", "commid": "rust_pr_32227"}], "negative_passages": []}
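The fix in this record moves the outstanding-reference counters onto per-`(name, namespace)` resolution entries so an import cannot conflict with itself. A minimal standalone sketch of that bookkeeping (struct and method names are simplified assumptions based on the diff):

```rust
use std::collections::HashMap;

// Per-(name, namespace) entry: total outstanding import references,
// plus a second counter for the public subset, as in the diff.
#[derive(Default)]
struct Resolution {
    outstanding_references: u32,
    pub_outstanding_references: u32,
}

impl Resolution {
    fn increment(&mut self, is_public: bool) {
        self.outstanding_references += 1;
        if is_public {
            self.pub_outstanding_references += 1;
        }
    }

    fn decrement(&mut self, is_public: bool) {
        assert!(self.outstanding_references > 0);
        self.outstanding_references -= 1;
        if is_public {
            assert!(self.pub_outstanding_references > 0);
            self.pub_outstanding_references -= 1;
        }
    }
}

fn main() {
    let mut map: HashMap<(&str, &str), Resolution> = HashMap::new();
    let r = map.entry(("Foo", "type")).or_insert_with(Default::default);
    r.increment(true);
    r.increment(false);
    r.decrement(true);
    assert_eq!(map[&("Foo", "type")].outstanding_references, 1);
    assert_eq!(map[&("Foo", "type")].pub_outstanding_references, 0);
    println!("ok");
}
```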
{"query_id": "q-en-rust-adc16be03be73de6e0a1b3d713cfe38f6f55b14b6e38aa02975befa3d0479aaf", "query": "Unfortunately, I can't reproduce this at a smaller scale. This issue occurs with as of commit . Rust version is Currently the tests (run from the directory with ) pass. However, applying the following patch: causes the following compiler error: which is claiming that an import conflicts with itself. I have pushed up the branch with that change applied for easy reproduction. Again, I'm sorry that I cannot provide a report on a smaller scale, but this appears to be some edge case blowing up. I have no clue why adding a module with a few imports and nothing else would be causing this issue.\nBisecting to find the nightly which caused this\nI can confirm that this issue does not occur on . Unfortunately, there was a bug that prevented us from compiling on that wasn't fixed until , so I cannot provide anything more specific than that.\ncc\nThanks for the report! This bug was introduced in . I diagnosed the issue and will fix it ASAP.", "positive_passages": [{"docid": "doc-en-rust-95a5222989c78f3ff8f6367c46cd3a6b4d12e78e84148080a658042419da77b2", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 * This function fails if `c` is not a valid char * This function returns none if `c` is not a valid char * * # Return value *", "commid": "rust_pr_3251"}], "negative_passages": []}
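The doc fix in this record ("fails" corrected to "returns none") matches the behavior of `std::char::from_u32` on modern Rust, illustrated here:

```rust
fn main() {
    // A valid Unicode scalar value converts successfully.
    assert_eq!(std::char::from_u32(0x61), Some('a'));
    // A surrogate code point is not a valid char: the function
    // returns None rather than failing or aborting.
    assert_eq!(std::char::from_u32(0xD800), None);
    println!("{:?}", std::char::from_u32(0x61));
}
```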
{"query_id": "q-en-rust-1da86149387dd4a1e3c80739e6a091dbc8d7b2c4791f06905f5e96bb8823b756", "query": "talks about what it uses to query time on the various platforms, but that is an implementation detail. The API documentation should define minimum precision guarantees for these APIs, so the user can make an informed decision whether using these APIs will provide sufficient precision for their use case on all supported platforms. For example, if a user wanted to use these APIs to limit framerate in a video game, second precision wouldn't cut it, but millisecond precision should be sufficient for most cases, so if these APIs guarantee at least millisecond precision, then they can be used for this purpose.\nShould we guarantee precision or document which system APIs these things use?\nI definitely agree this should be documented, at least for major platforms. If it's not documented, all you can do is test the precision, but then you have to make assumptions about future systems. If we guarantee a precision, that leaves open the question of what to do for a system that can't support such a precision. Panic?\nWe already can't guarantee any particular precision. Linux for example runs on processors with a wide variety of timing hardware, with all sorts of different precisions.\nPerhaps we need another API to query the precision.\nIf it's not possible to provide any guarantees, we should at least document the precision for major (tier-1) platforms, and explicitly state that there are no guarantees for other platforms.\nDoes \"tier-1 platform\" include guest Linux on VMWare? Its clock is known to be freaky. POSIX has which return the (finest) unit of time supported by the system, but they don't guarantee measurements are that accurate. When OSs don't guarantee anything, what can Rust do? Perhaps we can just add a wrapper of these query APIs if that is convenient to somebody.\nHrrm. 
If truly nothing else can be done, then at least let's document the platform APIs we use for at least the major platforms, so the user doesn't have to look them up in the RFCs. That's at least more information about the precision than nothing at all.\nDo we aim for at least some level of precision where possible? // Needs to go *after* expansion to be able to check the results // of macro expansion. This runs before #[cfg] to try to catch as // much as possible (e.g. help the programmer avoid platform // specific differences) time(time_passes, \"complete gated feature checking 1\", || { sess.track_errors(|| { let features = syntax::feature_gate::check_crate(sess.codemap(), &sess.parse_sess.span_diagnostic, &krate, &attributes, sess.opts.unstable_features); *sess.features.borrow_mut() = features; }) })?; // JBC: make CFG processing part of expansion to avoid this problem: // strip again, in case expansion added anything with a #[cfg].", "commid": "rust_pr_32846"}], "negative_passages": []}
{"query_id": "q-en-rust-a3e8c34e864973a8bfb1792cd3bd2f96489995c8593c9b9f71bcfc3164e1cf95", "query": "Macro-expanded unconfigured items are gated feature checked, but ordinary unconfigured items are not. For example, the following should compile, ... If we manually expand , it always compiles:\nIt looks like this was (introduced in ), but I don't think it's a good idea. cc cc\nThere are arguments on about why this is not a good idea: is typically used by crates that need to build against both stable and nightly and use nightly-specific features when available (the most important example that I have in mind is syntax extensions using plugins on nightly and syntex on stable). Checking for use of gated features inside this -guarded code will break those crates, and it's not a good thing, this use case is actually useful. This check pass should be removed.\ncc", "positive_passages": [{"docid": "doc-en-rust-c843c3bd41f3df79540d8bfc8b83aab0c94cd1a4de42d62524fc47e9740b6b76", "text": "\"checking for inline asm in case the target doesn't support it\", || no_asm::check_crate(sess, &krate)); // One final feature gating of the true AST that gets compiled // later, to make sure we've got everything (e.g. configuration // can insert new attributes via `cfg_attr`) time(time_passes, \"complete gated feature checking 2\", || { // Needs to go *after* expansion to be able to check the results of macro expansion. time(time_passes, \"complete gated feature checking\", || { sess.track_errors(|| { let features = syntax::feature_gate::check_crate(sess.codemap(), &sess.parse_sess.span_diagnostic,", "commid": "rust_pr_32846"}], "negative_passages": []}
{"query_id": "q-en-rust-a3e8c34e864973a8bfb1792cd3bd2f96489995c8593c9b9f71bcfc3164e1cf95", "query": "Macro-expanded unconfigured items are gated feature checked, but ordinary unconfigured items are not. For example, the following should compile, ... If we manually expand , it always compiles:\nIt looks like this was (introduced in ), but I don't think it's a good idea. cc cc\nThere are arguments on about why this is not a good idea: is typically used by crates that need to build against both stable and nightly and use nightly-specific features when available (the most important example that I have in mind is syntax extensions using plugins on nightly and syntex on stable). Checking for use of gated features inside this -guarded code will break those crates, and it's not a good thing, this use case is actually useful. This check pass should be removed.\ncc", "positive_passages": [{"docid": "doc-en-rust-4d783f03143fcfd8f54fe201aebdd2da979c4ec9d47c13616620d7f6f6efdbf9", "text": "// When we enter a module, record it, for the sake of `module!` pub fn expand_item(it: P let it = expand_item_multi_modifier(Annotatable::Item(it), fld); expand_annotatable(it, fld) expand_annotatable(Annotatable::Item(it), fld) .into_iter().map(|i| i.expect_item()).collect() }", "commid": "rust_pr_32846"}], "negative_passages": []}
{"query_id": "q-en-rust-a3e8c34e864973a8bfb1792cd3bd2f96489995c8593c9b9f71bcfc3164e1cf95", "query": "Macro-expanded unconfigured items are gated feature checked, but ordinary unconfigured items are not. For example, the following should compile, ... If we manually expand , it always compiles:\nIt looks like this was (introduced in ), but I don't think it's a good idea. cc cc\nThere are arguments on about why this is not a good idea: is typically used by crates that need to build against both stable and nightly and use nightly-specific features when available (the most important example that I have in mind is syntax extensions using plugins on nightly and syntex on stable). Checking for use of gated features inside this -guarded code will break those crates, and it's not a good thing, this use case is actually useful. This check pass should be removed.\ncc", "positive_passages": [{"docid": "doc-en-rust-8a1e84c9676fe716d00cfbeec35a44fc5a97ea42529bdc49e3ece5f9be4e7fcd", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 ptr::copy(ptr, new_ptr, cmp::min(size, old_size)); deallocate(ptr, old_size, align); if !new_ptr.is_null() { ptr::copy(ptr, new_ptr, cmp::min(size, old_size)); deallocate(ptr, old_size, align); } new_ptr } }", "commid": "rust_pr_32997"}], "negative_passages": []}
{"query_id": "q-en-rust-658d1fdf442652abfc80f22e8a4fb5bd036ed3231b41d09d848e4078368f8a7d", "query": "Before (on librustc): After (on same librustc): cc\nThe extra RAM usage seems to be because of the AST not getting freed after the early lint checks. Not sure if this is intentional.\nThe AST should definitely still get freed, nobody touched , so it's not intentional (unless something about the command line options has changed?)\nThe performance regression in looks like it was caused by . I'll work on a fix now. I'm not sure what caused the RAM increase.\nResolution used to take about 1 seconds a few months ago in winapi and now it takes 9 seconds.\nI fixed the performance regression in .", "positive_passages": [{"docid": "doc-en-rust-319465190e7d3737ab23f558710d6f58a83ceba5bcf35666661b4653bc84ec5a", "text": "glob_importers: RefCell let mut search_in_module = |module: Module<'a>| module.for_each_child(|_, ns, binding| { if ns != TypeNS { return } let trait_def_id = match binding.def() { Some(Def::Trait(trait_def_id)) => trait_def_id, Some(..) 
| None => return, }; if self.trait_item_map.contains_key(&(name, trait_def_id)) { add_trait_info(&mut found_traits, trait_def_id, name); let trait_name = self.get_trait_name(trait_def_id); self.record_use(trait_name, TypeNS, binding); let mut search_in_module = |module: Module<'a>| { let mut traits = module.traits.borrow_mut(); if traits.is_none() { let mut collected_traits = Vec::new(); module.for_each_child(|_, ns, binding| { if ns != TypeNS { return } if let Some(Def::Trait(_)) = binding.def() { collected_traits.push(binding); } }); *traits = Some(collected_traits.into_boxed_slice()); } }); for binding in traits.as_ref().unwrap().iter() { let trait_def_id = binding.def().unwrap().def_id(); if self.trait_item_map.contains_key(&(name, trait_def_id)) { add_trait_info(&mut found_traits, trait_def_id, name); let trait_name = self.get_trait_name(trait_def_id); self.record_use(trait_name, TypeNS, binding); } } }; search_in_module(search_module); match search_module.parent_link {", "commid": "rust_pr_33064"}], "negative_passages": []}
{"query_id": "q-en-rust-d627101c1998a69a9acfc4fc050d526fd9490d359fe680c8705f7fea7fa42de8", "query": "Should it expect a ? Full example code:\nIt should expect , but it's lost from the expected set. I't not an issue specific for , but a symptom of a more general problem with expected token sets. There's a long weekend ahead, I'll investigate.\nOh wait, this works on nightly, as expected (fixed in , more specifically). It means there are no problems with expected token sets.", "positive_passages": [{"docid": "doc-en-rust-98800dbc71b2c8918d6accba945d4cc965d3490141f5b662d51521d4e482a07a", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 if not os.path.exists(self.rustc_stamp()): if not os.path.exists(self.rustc_stamp()) or self.clean: return True with open(self.rustc_stamp(), 'r') as f: return self.stage0_rustc_date() != f.read() def cargo_out_of_date(self): if not os.path.exists(self.cargo_stamp()): if not os.path.exists(self.cargo_stamp()) or self.clean: return True with open(self.cargo_stamp(), 'r') as f: return self.stage0_cargo_date() != f.read()", "commid": "rust_pr_33991"}], "negative_passages": []}
{"query_id": "q-en-rust-e8b63fbd3eef42ff24fbcc14cfedd13b401fb28c9bb6269f0f29605f837a2515", "query": "Errors like are happening on the bots, e.g.: This is under the covers, but rustbuild's command should blow away the build system directory ahead of time to be more resilient to bugs like this.", "positive_passages": [{"docid": "doc-en-rust-a7525ed96805665467b62d3f796c2ec735b3cf490f69c383cbf3cf683e4e4a9a", "text": "return '' def build_bootstrap(self): build_dir = os.path.join(self.build_dir, \"bootstrap\") if self.clean and os.path.exists(build_dir): shutil.rmtree(build_dir) env = os.environ.copy() env[\"CARGO_TARGET_DIR\"] = os.path.join(self.build_dir, \"bootstrap\") env[\"CARGO_TARGET_DIR\"] = build_dir env[\"RUSTC\"] = self.rustc() env[\"LD_LIBRARY_PATH\"] = os.path.join(self.bin_root(), \"lib\") env[\"DYLD_LIBRARY_PATH\"] = os.path.join(self.bin_root(), \"lib\")", "commid": "rust_pr_33991"}], "negative_passages": []}
{"query_id": "q-en-rust-e8b63fbd3eef42ff24fbcc14cfedd13b401fb28c9bb6269f0f29605f837a2515", "query": "Errors like are happening on the bots, e.g.: This is under the covers, but rustbuild's command should blow away the build system directory ahead of time to be more resilient to bugs like this.", "positive_passages": [{"docid": "doc-en-rust-ad850e0c783007976584f6829b38980a717032bb71173a9aa28a3d5c8160348f", "text": "def main(): parser = argparse.ArgumentParser(description='Build rust') parser.add_argument('--config') parser.add_argument('--clean', action='store_true') parser.add_argument('-v', '--verbose', action='store_true') args = [a for a in sys.argv if a != '-h']", "commid": "rust_pr_33991"}], "negative_passages": []}
{"query_id": "q-en-rust-e8b63fbd3eef42ff24fbcc14cfedd13b401fb28c9bb6269f0f29605f837a2515", "query": "Errors like are happening on the bots, e.g.: This is under the covers, but rustbuild's command should blow away the build system directory ahead of time to be more resilient to bugs like this.", "positive_passages": [{"docid": "doc-en-rust-cc13989c36eb40f8b69377233f00b80e31c7c7c87f425459847e052fd5ad1ac6", "text": "rb.rust_root = os.path.abspath(os.path.join(__file__, '../../..')) rb.build_dir = os.path.join(os.getcwd(), \"build\") rb.verbose = args.verbose rb.clean = args.clean try: with open(args.config or 'config.toml') as config:", "commid": "rust_pr_33991"}], "negative_passages": []}
{"query_id": "q-en-rust-b278a33a45c1e97c10100b3dad4ec24c2c8bb8eb8922d851b0ab693416a57ae4", "query": "The rust book states that asm! clobbers should be written as \"{rdx}\" but when you use them as such the clobbers will be silently ignored making for some nasty bugs. output Relevant part of objdump: Relevant LLVM IR: As we can seen in the emitted LLVM IR, the given clobbers are surrounded by {{}}. I'm not sure what effect this has, but it seems like LLVM just silently ignores them. A fix would either be to not send the extra {}'s to LLVM or to edit the documentation to reflect that clobbers should be given without surrounding {}'s. Either way, this should probably not generate invalid code silently.", "positive_passages": [{"docid": "doc-en-rust-b1d54e385ea164094b8e04d30d72890eb838c01d0f373a06329a51f593ed0804", "text": "asm!(\"xor %eax, %eax\" : : : \"{eax}\" : \"eax\" ); # } } ```", "commid": "rust_pr_34682"}], "negative_passages": []}
{"query_id": "q-en-rust-b278a33a45c1e97c10100b3dad4ec24c2c8bb8eb8922d851b0ab693416a57ae4", "query": "The rust book states that asm! clobbers should be written as \"{rdx}\" but when you use them as such the clobbers will be silently ignored making for some nasty bugs. output Relevant part of objdump: Relevant LLVM IR: As we can seen in the emitted LLVM IR, the given clobbers are surrounded by {{}}. I'm not sure what effect this has, but it seems like LLVM just silently ignores them. A fix would either be to not send the extra {}'s to LLVM or to edit the documentation to reflect that clobbers should be given without surrounding {}'s. Either way, this should probably not generate invalid code silently.", "positive_passages": [{"docid": "doc-en-rust-727a0780e24d618962e673d38bc11f2f3c142b8ca1bb78d060ba280109f2716f", "text": "# #![feature(asm)] # #[cfg(any(target_arch = \"x86\", target_arch = \"x86_64\"))] # fn main() { unsafe { asm!(\"xor %eax, %eax\" ::: \"{eax}\"); asm!(\"xor %eax, %eax\" ::: \"eax\"); # } } ```", "commid": "rust_pr_34682"}], "negative_passages": []}
{"query_id": "q-en-rust-b278a33a45c1e97c10100b3dad4ec24c2c8bb8eb8922d851b0ab693416a57ae4", "query": "The rust book states that asm! clobbers should be written as \"{rdx}\" but when you use them as such the clobbers will be silently ignored making for some nasty bugs. output Relevant part of objdump: Relevant LLVM IR: As we can seen in the emitted LLVM IR, the given clobbers are surrounded by {{}}. I'm not sure what effect this has, but it seems like LLVM just silently ignores them. A fix would either be to not send the extra {}'s to LLVM or to edit the documentation to reflect that clobbers should be given without surrounding {}'s. Either way, this should probably not generate invalid code silently.", "positive_passages": [{"docid": "doc-en-rust-4d6e98f9e04985cfee3b87b307b5c34c306ebaf9c622f8e116984f6f2277e30f", "text": "# #[cfg(any(target_arch = \"x86\", target_arch = \"x86_64\"))] # fn main() { unsafe { // Put the value 0x200 in eax asm!(\"mov $$0x200, %eax\" : /* no outputs */ : /* no inputs */ : \"{eax}\"); asm!(\"mov $$0x200, %eax\" : /* no outputs */ : /* no inputs */ : \"eax\"); # } } ```", "commid": "rust_pr_34682"}], "negative_passages": []}
{"query_id": "q-en-rust-b278a33a45c1e97c10100b3dad4ec24c2c8bb8eb8922d851b0ab693416a57ae4", "query": "The rust book states that asm! clobbers should be written as \"{rdx}\" but when you use them as such the clobbers will be silently ignored making for some nasty bugs. output Relevant part of objdump: Relevant LLVM IR: As we can seen in the emitted LLVM IR, the given clobbers are surrounded by {{}}. I'm not sure what effect this has, but it seems like LLVM just silently ignores them. A fix would either be to not send the extra {}'s to LLVM or to edit the documentation to reflect that clobbers should be given without surrounding {}'s. Either way, this should probably not generate invalid code silently.", "positive_passages": [{"docid": "doc-en-rust-1d604524d84e25804c5f1f9600ed0b1d966dc265c6de68a3638e1a15c8d96597", "text": "if OPTIONS.iter().any(|&opt| s == opt) { cx.span_warn(p.last_span, \"expected a clobber, found an option\"); } else if s.starts_with(\"{\") || s.ends_with(\"}\") { cx.span_err(p.last_span, \"clobber should not be surrounded by braces\"); } clobs.push(s); } }", "commid": "rust_pr_34682"}], "negative_passages": []}
{"query_id": "q-en-rust-b278a33a45c1e97c10100b3dad4ec24c2c8bb8eb8922d851b0ab693416a57ae4", "query": "The rust book states that asm! clobbers should be written as \"{rdx}\" but when you use them as such the clobbers will be silently ignored making for some nasty bugs. output Relevant part of objdump: Relevant LLVM IR: As we can seen in the emitted LLVM IR, the given clobbers are surrounded by {{}}. I'm not sure what effect this has, but it seems like LLVM just silently ignores them. A fix would either be to not send the extra {}'s to LLVM or to edit the documentation to reflect that clobbers should be given without surrounding {}'s. Either way, this should probably not generate invalid code silently.", "positive_passages": [{"docid": "doc-en-rust-9f65aaeb7a5953252377a300b771c2afe7af0a5cbcd410ce5968f23d729cabfb", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 use rustc_middle::ty::{self, Binder, DefIdTree, IsSuggestable, Ty}; use rustc_middle::ty::{ self, suggest_constraining_type_params, Binder, DefIdTree, IsSuggestable, ToPredicate, Ty, }; use rustc_session::errors::ExprParenthesesNeeded; use rustc_span::symbol::sym; use rustc_span::Span;", "commid": "rust_pr_105679"}], "negative_passages": []}
{"query_id": "q-en-rust-4fd363e3c2caaa69ef789bf01f16038e09b968a59dfd250a6cc32eb3d0bfb237", "query": "This gives the very unintuitive error: You expect to return , i.e. a . It mysteriously returns an instead. Because has a clone impl, it too, can be cloned (really, copied), returning a reference of the same lifetime. This is an operation you rarely want to do explicitly, except when dealing with generics. This error can crop up when you've forgotten to implement on , and have a reference (which could have been silently inserted using , as in the code above). However, it's not very obvious what's happening here. It can also crop up We should hint that itself isn't cloneable and thus we fell back to cloning here. This could be a lint on hitting the clone impl for references, but lints run too late. I'm not sure how this can be implemented, since we need to see the source of a value and that's a bit tricky.\ncc\nInteresting. This does seem like it would require some pretty special-case code to achieve, but I agree it'd be a nice case to capture.\nA relatively easy way to catch most instances of this is to Pick up the expression which has a type error like this If it is a method call, check if that method call is a clone() call resolving to If it is a variable, find the variable's def If the def is a local def, look at the init expr, check if it's a bad clone This will miss things, but I think the majority of such bugs will be due to this. As it stands this feels like an incredibly easy case to hit (we've all forgotten to decorate structs).\ncc\nCC\nAs reported in : This reports:\nCurrent output is even worse for the original report: The last comment's code is marginally different:\nCurrent output: The last case should suggest constraining . Edit: that was easier than I thought (we already had the suggestion machinery):
&& let trait_ref = ty::Binder::dummy(self.tcx.mk_trait_ref(clone_trait_did, [expected_ty])) // And the expected type doesn't implement `Clone` && !self.predicate_must_hold_considering_regions(&traits::Obligation::new( self.tcx, traits::ObligationCause::dummy(), self.param_env,
ty::Binder::dummy(self.tcx.mk_trait_ref( clone_trait_did, [expected_ty], )), trait_ref, )) { diag.span_note(", "commid": "rust_pr_105679"}], "negative_passages": []}
{"query_id": "q-en-rust-4fd363e3c2caaa69ef789bf01f16038e09b968a59dfd250a6cc32eb3d0bfb237", "query": "This gives the very unintuitive error: You expect to return , i.e. a . It mysteriously returns an instead. Because has a clone impl, it too, can be cloned (really, copied), returning a reference of the same lifetime. This is an operation you rarely want to do explicitly, except when dealing with generics. This error can crop up when you've forgotten to implement on , and have a reference (which could have been silently inserted using , as in the code above). However, it's not very obvious what's happening here. It can also crop up We should hint that itself isn't cloneable and thus we fell back to cloning here. This could be a lint on hitting the clone impl for references, but lints run too late. I'm not sure how this can be implemented, since we need to see the source of a value and that's a bit tricky.\ncc\nInteresting. This does seem like it would require some pretty special-case code to achieve, but I agree it'd be a nice case to capture.\nA relatively easy way to catch most instances of this is to Pick up the expression which has a type error like this If it is a method call, check if that method call is a clone() call resolving to If it is a variable, find the variable's def If the def is a local def, look at the init expr, check if it's a bad clone This will miss things, but I think the majority of such bugs will be due to this. As it stands this feels like an incredibly easy case to hit (we've all forgotten to decorate structs).\ncc\nCC\nAs reported in : This reports:\nCurrent output is even worse for the original report: The last comment's code is marginally different:\nCurrent output: The last case should suggest constraining . 
Edit: that was easier than I thought (we already had the suggestion machinery): let owner = self.tcx.hir().enclosing_body_owner(expr.hir_id); if let ty::Param(param) = expected_ty.kind() && let Some(generics) = self.tcx.hir().get_generics(owner) { suggest_constraining_type_params( self.tcx, generics, diag, vec![(param.name.as_str(), \"Clone\", Some(clone_trait_did))].into_iter(), ); } else { self.suggest_derive(diag, &[(trait_ref.to_predicate(self.tcx), None, None)]); } } }", "commid": "rust_pr_105679"}], "negative_passages": []}
{"query_id": "q-en-rust-4fd363e3c2caaa69ef789bf01f16038e09b968a59dfd250a6cc32eb3d0bfb237", "query": "This gives the very unintuitive error: You expect to return , i.e. a . It mysteriously returns an instead. Because has a clone impl, it too, can be cloned (really, copied), returning a reference of the same lifetime. This is an operation you rarely want to do explicitly, except when dealing with generics. This error can crop up when you've forgotten to implement on , and have a reference (which could have been silently inserted using , as in the code above). However, it's not very obvious what's happening here. It can also crop up We should hint that itself isn't cloneable and thus we fell back to cloning here. This could be a lint on hitting the clone impl for references, but lints run too late. I'm not sure how this can be implemented, since we need to see the source of a value and that's a bit tricky.\ncc\nInteresting. This does seem like it would require some pretty special-case code to achieve, but I agree it'd be a nice case to capture.\nA relatively easy way to catch most instances of this is to Pick up the expression which has a type error like this If it is a method call, check if that method call is a clone() call resolving to If it is a variable, find the variable's def If the def is a local def, look at the init expr, check if it's a bad clone This will miss things, but I think the majority of such bugs will be due to this. As it stands this feels like an incredibly easy case to hit (we've all forgotten to decorate structs).\ncc\nCC\nAs reported in : This reports:\nCurrent output is even worse for the original report: The last comment's code is marginally different:\nCurrent output: The last case should suggest constraining . Edit: that was easier than I thought (we already had the suggestion machinery):
fn suggest_derive( pub fn suggest_derive( &self, err: &mut Diagnostic, unsatisfied_predicates: &[(", "commid": "rust_pr_105679"}], "negative_passages": []}
{"query_id": "q-en-rust-4fd363e3c2caaa69ef789bf01f16038e09b968a59dfd250a6cc32eb3d0bfb237", "query": "This gives the very unintuitive error: You expect to return , i.e. a . It mysteriously returns an instead. Because has a clone impl, it too, can be cloned (really, copied), returning a reference of the same lifetime. This is an operation you rarely want to do explicitly, except when dealing with generics. This error can crop up when you've forgotten to implement on , and have a reference (which could have been silently inserted using , as in the code above). However, it's not very obvious what's happening here. It can also crop up We should hint that itself isn't cloneable and thus we fell back to cloning here. This could be a lint on hitting the clone impl for references, but lints run too late. I'm not sure how this can be implemented, since we need to see the source of a value and that's a bit tricky.\ncc\nInteresting. This does seem like it would require some pretty special-case code to achieve, but I agree it'd be a nice case to capture.\nA relatively easy way to catch most instances of this is to Pick up the expression which has a type error like this If it is a method call, check if that method call is a clone() call resolving to If it is a variable, find the variable's def If the def is a local def, look at the init expr, check if it's a bad clone This will miss things, but I think the majority of such bugs will be due to this. As it stands this feels like an incredibly easy case to hit (we've all forgotten to decorate structs).\ncc\nCC\nAs reported in : This reports:\nCurrent output is even worse for the original report: The last comment's code is marginally different:\nCurrent output: The last case should suggest constraining . Edit: that was easier than I thought (we already had the suggestion machinery):
// run-rustfix fn wat
// run-rustfix fn wat
error[E0308]: mismatched types --> $DIR/clone-on-unconstrained-borrowed-type-param.rs:3:5 | LL | fn wat
help: consider annotating `NotClone` with `#[derive(Clone)]` | LL | #[derive(Clone)] | error: aborting due to previous error", "commid": "rust_pr_105679"}], "negative_passages": []}
{"query_id": "q-en-rust-afd99a09558cb6a3848844ef436e6494e1cbb1dbf88a969fa3f44e23fe09260f", "query": "Was just trying to read through std::fmt to understand how format values to a particular decimal point. I came across this set of examples, which don't do a good job of explaining the output of each, so it's difficult to visually pattern match what I type in to what comes out:\nSorry for the mess :sweat_smile: I guess it is clearer now.", "positive_passages": [{"docid": "doc-en-rust-b0a4fc36f2d2163a9b0cccaf614f538b3a8e828825e08a1606290e5ab2eb7200", "text": "//! in this case, if one uses the format string `{
//! For example, these: //! For example, the following calls all print the same thing `Hello x is 0.01000`: //! //! ``` //! // Hello {arg 0 (x)} is {arg 1 (0.01) with precision specified inline (5)} //! // Hello {arg 0 (\"x\")} is {arg 1 (0.01) with precision specified inline (5)} //! println!(\"Hello {0} is {1:.5}\", \"x\", 0.01); //! //! // Hello {arg 1 (x)} is {arg 2 (0.01) with precision specified in arg 0 (5)} //! // Hello {arg 1 (\"x\")} is {arg 2 (0.01) with precision specified in arg 0 (5)} //! println!(\"Hello {1} is {2:.0$}\", 5, \"x\", 0.01); //! //! // Hello {arg 0 (x)} is {arg 2 (0.01) with precision specified in arg 1 (5)} //! // Hello {arg 0 (\"x\")} is {arg 2 (0.01) with precision specified in arg 1 (5)} //! println!(\"Hello {0} is {2:.1$}\", \"x\", 5, 0.01); //! //! // Hello {next arg (x)} is {second of next two args (0.01) with precision //! // Hello {next arg (\"x\")} is {second of next two args (0.01) with precision //! // specified in first of next two args (5)} //! println!(\"Hello {} is {:.*}\", \"x\", 5, 0.01); //! //! // Hello {next arg (x)} is {arg 2 (0.01) with precision //! // Hello {next arg (\"x\")} is {arg 2 (0.01) with precision //! // specified in its predecessor (5)} //! println!(\"Hello {} is {2:.*}\", \"x\", 5, 0.01); //! //! // Hello {next arg (x)} is {arg \"number\" (0.01) with precision specified //! // Hello {next arg (\"x\")} is {arg \"number\" (0.01) with precision specified //! // in arg \"prec\" (5)} //! println!(\"Hello {} is {number:.prec$}\", \"x\", prec = 5, number = 0.01); //! ``` //! //! All print the same thing: //! //! ```text //! Hello x is 0.01000 //! ``` //! //! While these: //! //! ```", "commid": "rust_pr_35050"}], "negative_passages": []}
{"query_id": "q-en-rust-0d1264126124504c28deff0fdd1f9f10ff7b0971b2cfc371ed677d6f5abcce94", "query": "It's been unsafe since 2014 when it was called . Nobody thought about it, I guess, but it shouldn't be unsafe.\nFor reference the PR which it is\nSomeone thought about it enough to : Has something changed to make this consideration unnecessary or do you have an explanation of why the reasoning is faulty?\nBecause the function takes so by definition it must be the owner of the value so there can be no other viewers, either from this thread or another thread.\nAlthough this isn't backwards-compatible there is some prior art of going unsafe -safe: (although, that was pre-1.0)\nThat's weird, it seems like you ought to be able to assign a safe fn to an unsafe fn pointer. , Nicole Mazzuca wrote:\nWe removed the subtyping relation at some point; it does seem like a coercion would be (probably) ok, but that would still not be (strictly) backwards compatible.\n(I tend to agree that could be safe, though, and I highly doubt any existing crates would actually break due to this change.)\nWe could always do a crater run just to be absolutely sure nobody is turning into a function pointer.\nHI ! Any news on this one ?\nNo change that I know of.\nFWIW the relevant coercion was in\nThis change is breaking my crate. If I add the then it will work for older compilers. If I remove the then it will work for newer compilers. Is there a conditional compilation attribute for compiler version? I can't find one.\nUse\nYeah, that's what I'm doing for now. It seems kind of lame, though. But it does work.", "positive_passages": [{"docid": "doc-en-rust-16da9f2bf0a84a71964433a08b9ebd21b8bc6609c84744a2dd2d7684c7e44c34", "text": "/// ``` #[stable(feature = \"move_cell\", since = \"1.17.0\")] pub fn into_inner(self) -> T { unsafe { self.value.into_inner() } self.value.into_inner() } }", "commid": "rust_pr_47204"}], "negative_passages": []}
{"query_id": "q-en-rust-0d1264126124504c28deff0fdd1f9f10ff7b0971b2cfc371ed677d6f5abcce94", "query": "It's been unsafe since 2014 when it was called . Nobody thought about it, I guess, but it shouldn't be unsafe.\nFor reference the PR which it is\nSomeone thought about it enough to : Has something changed to make this consideration unnecessary or do you have an explanation of why the reasoning is faulty?\nBecause the function takes so by definition it must be the owner of the value so there can be no other viewers, either from this thread or another thread.\nAlthough this isn't backwards-compatible there is some prior art of going unsafe -safe: (although, that was pre-1.0)\nThat's weird, it seems like you ought to be able to assign a safe fn to an unsafe fn pointer. , Nicole Mazzuca wrote:\nWe removed the subtyping relation at some point; it does seem like a coercion would be (probably) ok, but that would still not be (strictly) backwards compatible.\n(I tend to agree that could be safe, though, and I highly doubt any existing crates would actually break due to this change.)\nWe could always do a crater run just to be absolutely sure nobody is turning into a function pointer.\nHI ! Any news on this one ?\nNo change that I know of.\nFWIW the relevant coercion was in\nThis change is breaking my crate. If I add the then it will work for older compilers. If I remove the then it will work for newer compilers. Is there a conditional compilation attribute for compiler version? I can't find one.\nUse\nYeah, that's what I'm doing for now. It seems kind of lame, though. But it does work.", "positive_passages": [{"docid": "doc-en-rust-c6c890d736bf918b58a99677d7ce4c57e82bba6dea0ed6ff4f5fd462f825975b", "text": "// compiler statically verifies that it is not currently borrowed. // Therefore the following assertion is just a `debug_assert!`. 
debug_assert!(self.borrow.get() == UNUSED); unsafe { self.value.into_inner() } self.value.into_inner() } /// Replaces the wrapped value with a new one, returning the old value,", "commid": "rust_pr_47204"}], "negative_passages": []}
{"query_id": "q-en-rust-0d1264126124504c28deff0fdd1f9f10ff7b0971b2cfc371ed677d6f5abcce94", "query": "It's been unsafe since 2014 when it was called . Nobody thought about it, I guess, but it shouldn't be unsafe.\nFor reference the PR which it is\nSomeone thought about it enough to : Has something changed to make this consideration unnecessary or do you have an explanation of why the reasoning is faulty?\nBecause the function takes so by definition it must be the owner of the value so there can be no other viewers, either from this thread or another thread.\nAlthough this isn't backwards-compatible there is some prior art of going unsafe -safe: (although, that was pre-1.0)\nThat's weird, it seems like you ought to be able to assign a safe fn to an unsafe fn pointer. , Nicole Mazzuca wrote:\nWe removed the subtyping relation at some point; it does seem like a coercion would be (probably) ok, but that would still not be (strictly) backwards compatible.\n(I tend to agree that could be safe, though, and I highly doubt any existing crates would actually break due to this change.)\nWe could always do a crater run just to be absolutely sure nobody is turning into a function pointer.\nHI ! Any news on this one ?\nNo change that I know of.\nFWIW the relevant coercion was in\nThis change is breaking my crate. If I add the then it will work for older compilers. If I remove the then it will work for newer compilers. Is there a conditional compilation attribute for compiler version? I can't find one.\nUse\nYeah, that's what I'm doing for now. It seems kind of lame, though. But it does work.", "positive_passages": [{"docid": "doc-en-rust-64b3f025f7044d1ddff8acb6b0ffe7dd386a6390fbf860aafe0566b7308134d7", "text": "/// Unwraps the value. /// /// # Safety /// /// This function is unsafe because this thread or another thread may currently be /// inspecting the inner value. /// /// # Examples /// /// ```", "commid": "rust_pr_47204"}], "negative_passages": []}
{"query_id": "q-en-rust-0d1264126124504c28deff0fdd1f9f10ff7b0971b2cfc371ed677d6f5abcce94", "query": "It's been unsafe since 2014 when it was called . Nobody thought about it, I guess, but it shouldn't be unsafe.\nFor reference the PR which it is\nSomeone thought about it enough to : Has something changed to make this consideration unnecessary or do you have an explanation of why the reasoning is faulty?\nBecause the function takes so by definition it must be the owner of the value so there can be no other viewers, either from this thread or another thread.\nAlthough this isn't backwards-compatible there is some prior art of going unsafe -safe: (although, that was pre-1.0)\nThat's weird, it seems like you ought to be able to assign a safe fn to an unsafe fn pointer. , Nicole Mazzuca wrote:\nWe removed the subtyping relation at some point; it does seem like a coercion would be (probably) ok, but that would still not be (strictly) backwards compatible.\n(I tend to agree that could be safe, though, and I highly doubt any existing crates would actually break due to this change.)\nWe could always do a crater run just to be absolutely sure nobody is turning into a function pointer.\nHI ! Any news on this one ?\nNo change that I know of.\nFWIW the relevant coercion was in\nThis change is breaking my crate. If I add the then it will work for older compilers. If I remove the then it will work for newer compilers. Is there a conditional compilation attribute for compiler version? I can't find one.\nUse\nYeah, that's what I'm doing for now. It seems kind of lame, though. 
But it does work.", "positive_passages": [{"docid": "doc-en-rust-a1af952078c4e2b21fc389d658b8f11b36a5bc00d83a63e47d6787cc9bace151", "text": "/// /// let uc = UnsafeCell::new(5); /// /// let five = unsafe { uc.into_inner() }; /// let five = uc.into_inner(); /// ``` #[inline] #[stable(feature = \"rust1\", since = \"1.0.0\")] pub unsafe fn into_inner(self) -> T { pub fn into_inner(self) -> T { self.value } }", "commid": "rust_pr_47204"}], "negative_passages": []}
{"query_id": "q-en-rust-0d1264126124504c28deff0fdd1f9f10ff7b0971b2cfc371ed677d6f5abcce94", "query": "It's been unsafe since 2014 when it was called . Nobody thought about it, I guess, but it shouldn't be unsafe.\nFor reference the PR which it is\nSomeone thought about it enough to : Has something changed to make this consideration unnecessary or do you have an explanation of why the reasoning is faulty?\nBecause the function takes so by definition it must be the owner of the value so there can be no other viewers, either from this thread or another thread.\nAlthough this isn't backwards-compatible there is some prior art of going unsafe -safe: (although, that was pre-1.0)\nThat's weird, it seems like you ought to be able to assign a safe fn to an unsafe fn pointer. , Nicole Mazzuca wrote:\nWe removed the subtyping relation at some point; it does seem like a coercion would be (probably) ok, but that would still not be (strictly) backwards compatible.\n(I tend to agree that could be safe, though, and I highly doubt any existing crates would actually break due to this change.)\nWe could always do a crater run just to be absolutely sure nobody is turning into a function pointer.\nHI ! Any news on this one ?\nNo change that I know of.\nFWIW the relevant coercion was in\nThis change is breaking my crate. If I add the then it will work for older compilers. If I remove the then it will work for newer compilers. Is there a conditional compilation attribute for compiler version? I can't find one.\nUse\nYeah, that's what I'm doing for now. It seems kind of lame, though. But it does work.", "positive_passages": [{"docid": "doc-en-rust-406d0fe7110cd57dd6e1e7a5c1ed7494da6b617a3616ca32e75f792f638aa7dc", "text": "#[inline] #[stable(feature = \"atomic_access\", since = \"1.15.0\")] pub fn into_inner(self) -> bool { unsafe { self.v.into_inner() != 0 } self.v.into_inner() != 0 } /// Loads a value from the bool.", "commid": "rust_pr_47204"}], "negative_passages": []}
{"query_id": "q-en-rust-0d1264126124504c28deff0fdd1f9f10ff7b0971b2cfc371ed677d6f5abcce94", "query": "It's been unsafe since 2014 when it was called . Nobody thought about it, I guess, but it shouldn't be unsafe.\nFor reference the PR which it is\nSomeone thought about it enough to : Has something changed to make this consideration unnecessary or do you have an explanation of why the reasoning is faulty?\nBecause the function takes so by definition it must be the owner of the value so there can be no other viewers, either from this thread or another thread.\nAlthough this isn't backwards-compatible there is some prior art of going unsafe -safe: (although, that was pre-1.0)\nThat's weird, it seems like you ought to be able to assign a safe fn to an unsafe fn pointer. , Nicole Mazzuca wrote:\nWe removed the subtyping relation at some point; it does seem like a coercion would be (probably) ok, but that would still not be (strictly) backwards compatible.\n(I tend to agree that could be safe, though, and I highly doubt any existing crates would actually break due to this change.)\nWe could always do a crater run just to be absolutely sure nobody is turning into a function pointer.\nHI ! Any news on this one ?\nNo change that I know of.\nFWIW the relevant coercion was in\nThis change is breaking my crate. If I add the then it will work for older compilers. If I remove the then it will work for newer compilers. Is there a conditional compilation attribute for compiler version? I can't find one.\nUse\nYeah, that's what I'm doing for now. It seems kind of lame, though. But it does work.", "positive_passages": [{"docid": "doc-en-rust-0eb161898bda2bdeaddd72afbb0217e5fada8d47dd64bd741d6c407e6a5b3c12", "text": "#[inline] #[stable(feature = \"atomic_access\", since = \"1.15.0\")] pub fn into_inner(self) -> *mut T { unsafe { self.p.into_inner() } self.p.into_inner() } /// Loads a value from the pointer.", "commid": "rust_pr_47204"}], "negative_passages": []}
{"query_id": "q-en-rust-0d1264126124504c28deff0fdd1f9f10ff7b0971b2cfc371ed677d6f5abcce94", "query": "It's been unsafe since 2014 when it was called . Nobody thought about it, I guess, but it shouldn't be unsafe.\nFor reference the PR which it is\nSomeone thought about it enough to : Has something changed to make this consideration unnecessary or do you have an explanation of why the reasoning is faulty?\nBecause the function takes so by definition it must be the owner of the value so there can be no other viewers, either from this thread or another thread.\nAlthough this isn't backwards-compatible there is some prior art of going unsafe -safe: (although, that was pre-1.0)\nThat's weird, it seems like you ought to be able to assign a safe fn to an unsafe fn pointer. , Nicole Mazzuca wrote:\nWe removed the subtyping relation at some point; it does seem like a coercion would be (probably) ok, but that would still not be (strictly) backwards compatible.\n(I tend to agree that could be safe, though, and I highly doubt any existing crates would actually break due to this change.)\nWe could always do a crater run just to be absolutely sure nobody is turning into a function pointer.\nHI ! Any news on this one ?\nNo change that I know of.\nFWIW the relevant coercion was in\nThis change is breaking my crate. If I add the then it will work for older compilers. If I remove the then it will work for newer compilers. Is there a conditional compilation attribute for compiler version? I can't find one.\nUse\nYeah, that's what I'm doing for now. It seems kind of lame, though. But it does work.", "positive_passages": [{"docid": "doc-en-rust-e7b470a8ba30c5c4d612a5cfe7f8c8c7ee16ba380f9e27de9c47ea975e2747fd", "text": "#[inline] #[$stable_access] pub fn into_inner(self) -> $int_type { unsafe { self.v.into_inner() } self.v.into_inner() } /// Loads a value from the atomic integer.", "commid": "rust_pr_47204"}], "negative_passages": []}
{"query_id": "q-en-rust-2248a5835c777256553e41bdc9de1a961cdda64c90ad7ae16cedfbc4a4a55a52", "query": "From: src/test/compile- Error E0165 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-4c63558cc9d3a7d49de2099dde6fed65293235d399582099d4ea5d910ad53337", "text": "let &(ref first_arm_pats, _) = &arms[0]; let first_pat = &first_arm_pats[0]; let span = first_pat.span; span_err!(cx.tcx.sess, span, E0165, \"irrefutable while-let pattern\"); struct_span_err!(cx.tcx.sess, span, E0165, \"irrefutable while-let pattern\") .span_label(span, &format!(\"irrefutable pattern\")) .emit(); }, hir::MatchSource::ForLoopDesugar => {", "commid": "rust_pr_36125"}], "negative_passages": []}
{"query_id": "q-en-rust-2248a5835c777256553e41bdc9de1a961cdda64c90ad7ae16cedfbc4a4a55a52", "query": "From: src/test/compile- Error E0165 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-8b08f0d27415a104fa48b934f0765a0355d43ca6fc67afd713dc36a5a957c834", "text": "tcx.sess.add_lint(lint::builtin::MATCH_OF_UNIT_VARIANT_VIA_PAREN_DOTDOT, pat.id, pat.span, msg); } else { span_err!(tcx.sess, pat.span, E0164, \"{}\", msg); struct_span_err!(tcx.sess, pat.span, E0164, \"{}\", msg) .span_label(pat.span, &format!(\"not a tuple variant or struct\")).emit(); on_error(); } };", "commid": "rust_pr_36125"}], "negative_passages": []}
{"query_id": "q-en-rust-2248a5835c777256553e41bdc9de1a961cdda64c90ad7ae16cedfbc4a4a55a52", "query": "From: src/test/compile- Error E0165 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-ff8270d0215ddbada1cc300e82fad9e762a3ece3b4bbee04e4bcb0dce691dc6b", "text": ".emit(); } Err(CopyImplementationError::HasDestructor) => { span_err!(tcx.sess, span, E0184, struct_span_err!(tcx.sess, span, E0184, \"the trait `Copy` may not be implemented for this type; the type has a destructor\"); the type has a destructor\") .span_label(span, &format!(\"Copy not allowed on types with destructors\")) .emit(); } } });", "commid": "rust_pr_36125"}], "negative_passages": []}
{"query_id": "q-en-rust-2248a5835c777256553e41bdc9de1a961cdda64c90ad7ae16cedfbc4a4a55a52", "query": "From: src/test/compile- Error E0165 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-89b81bc2ced1e642ad37d83220c291f95b6679f99542f755cb9e538c06b880f2", "text": "fn bar(foo: Foo) -> u32 { match foo { Foo::B(i) => i, //~ ERROR E0164 //~| NOTE not a tuple variant or struct } }", "commid": "rust_pr_36125"}], "negative_passages": []}
{"query_id": "q-en-rust-2248a5835c777256553e41bdc9de1a961cdda64c90ad7ae16cedfbc4a4a55a52", "query": "From: src/test/compile- Error E0165 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-5ac9244c3a6c15fbdae0e13a10f6fd45a8109f654a92720dd619ed7e268a3979", "text": "fn main() { let irr = Irrefutable(0); while let Irrefutable(x) = irr { //~ ERROR E0165 //~| irrefutable pattern // ... } }", "commid": "rust_pr_36125"}], "negative_passages": []}
{"query_id": "q-en-rust-2248a5835c777256553e41bdc9de1a961cdda64c90ad7ae16cedfbc4a4a55a52", "query": "From: src/test/compile- Error E0165 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-2c8aa37c87b67ffc2fb3043a235cfa0e3690b6a6bb14675adbdd13e85870289f", "text": "// except according to those terms. #[derive(Copy)] //~ ERROR E0184 //~| NOTE Copy not allowed on types with destructors //~| NOTE in this expansion of #[derive(Copy)] struct Foo; impl Drop for Foo {", "commid": "rust_pr_36125"}], "negative_passages": []}
{"query_id": "q-en-rust-84fe10bf056d6814d3eda74c00d09770737f23d8e448b3d0bf98856cb4433638", "query": "From: src/test/compile- E0263 needs a span_label, updating it from: To: Bonus: underline and label the previous declaration:\nI'm on it.", "positive_passages": [{"docid": "doc-en-rust-38a9cbb8d7784ac6e58b0f388834f185efbcbac0f357b13904c143e24fa94b01", "text": "let lifetime_j = &lifetimes[j]; if lifetime_i.lifetime.name == lifetime_j.lifetime.name { span_err!(self.sess, lifetime_j.lifetime.span, E0263, \"lifetime name `{}` declared twice in the same scope\", lifetime_j.lifetime.name); struct_span_err!(self.sess, lifetime_j.lifetime.span, E0263, \"lifetime name `{}` declared twice in the same scope\", lifetime_j.lifetime.name) .span_label(lifetime_j.lifetime.span, &format!(\"declared twice\")) .span_label(lifetime_i.lifetime.span, &format!(\"previous declaration here\")) .emit(); } }", "commid": "rust_pr_35557"}], "negative_passages": []}
{"query_id": "q-en-rust-84fe10bf056d6814d3eda74c00d09770737f23d8e448b3d0bf98856cb4433638", "query": "From: src/test/compile- E0263 needs a span_label, updating it from: To: Bonus: underline and label the previous declaration:\nI'm on it.", "positive_passages": [{"docid": "doc-en-rust-c092baab073064214101ca07b352e059950a1f86d7a817d05cdddd9420fd0404", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms. fn foo<'a, 'b, 'a>(x: &'a str, y: &'b str) { } //~ ERROR E0263 fn foo<'a, 'b, 'a>(x: &'a str, y: &'b str) { //~^ ERROR E0263 //~| NOTE declared twice //~| NOTE previous declaration here } fn main() {}", "commid": "rust_pr_35557"}], "negative_passages": []}
{"query_id": "q-en-rust-65b77b7e06ebe16e3cdb656b584dc7bc041446f7fa3f3531ba29eb2e11ea7c50", "query": "A localized (e.g. German) MSVC can produce non-UTF-8 output, which can become almost unreadable in the way it's currently forwarded to the user. This can be seen in an (otherwise unrelated) issue comment: Note especially this line: That output might be a lot longer for multiple LNK errors (one line per error, but the lines are not properly separated in the output, because they are converted to ) and become really hard to read. If possible, the output should be converted to Unicode in this case. (previously reported as ) NOTE from Please install Visual Studio English Language Pack side by side with your favorite language pack for UI\nI\u2019m surprised we try to interpret any output in Windows as UTF-8 as opposed to UTF-16, like we should.\nI suspect that the MSVC compiler's raw output is encoded as Windows-1252 or Windows-1250 (for German) depending on the current console code page and not as UTF-16. Does solve the encoding problem?\nNo, neither in nor in (usually I'm working with the latter). It prints but the encoding problems persist.\nIs it just MSVC which is localized, or is your system codepage itself different? If we know that a certain program's output is the system codepage we could use with to convert it. Perhaps we could check whether the output is UTF-8, and attempt to do the codepage conversion if it isn't.\nI am using a German localized Windows. How can I find out the system codepage? The default CP for console windows is \"850 (OEM - Multilingual Lateinisch I)\".\nYou can call with or .\nThat returns codepage numbers 1252 for and 850 for .\nIn which case the output from the linker appears to be , so now someone just has to add code to rustc which detects when the linker output isn't utf-8 on windows and use with instead.\nI ran into this problem as well (also on a German Windows 10). 
As a workaround it helps if you go to in Win 10 to Settings -Region and Language -Language and add English as a language and make it default ( you may have to log out and back in again). After that, programs like should output using a locale that works with rust as it is right now. It would still be great if this could be fixed in rust :)\nJust saw another instance of this in , except with ) (a common encoding in China), where got printed instead of the desired .\nUsing nightly 2019-07-04, we can confirm that this issue is fixed. (I'm using Chinese version of Visual Studio).\nFor me, it still does not work even with nightly build (1.38 x86_64-pc-windows-msvc, Russian version of VS). I try to compile a Hello world project in Intellij Idea but always get the exception:\nCould you execute and see what that outputs?\nThat's a little strange. Could you open \"x64 Native Tools Command Prompt for Visual Studio 2019\" from start menu, and execute the following command: And see whether the output is English? If the output's still not in English, would you mind open \"Visual Studio Installer\" from start menu, choose \"Modify\" for the corresponding Visual Studio edition, and in the Language packs tab, install the English language pack. And try the instructions above again?\nthanks! The hint about installing the English language pack has made the previous error disappear. But now I have another message while running in Idea: I haven't changed the path to the std library, it's looks like\nYes, this is the same error as before, it's because you didn't install windows sdk. Choose the newest one in the visual studio installer too.\nthank you again. Now it's compiling and running. 
I'm happy ;) Just out of curiosity: was I supposed to know in advance that I would need an English lang pack and a Windows SDK when installing Visual Studio?\nPlease feel free to point out places where Rust tells you to install Visual Studio but does not tell you to install the Windows SDK, so that we can fix those places. It might still be worth implementing the text encoding version of the fix for people who don't have the English language pack, though I really wish we could just get unicode output out of VS without having to specify a language.\nsorry, seems I was a bit inattentive. Now I double-checked: after starting rustup- the screen info says, I should assure that I have Windows SDK installed. So my fault, nothing to fix there :) Thanks.\nThis is not a fix anymore.\nrefer to\nPlease make sure you have installed Visual Studio English Language Pack side by side with your favorite language pack for UI for this to work.\nI'm not sure this problem is really fixed. The PR that closed this kinda works around the issue because English is usually ASCII which is the same in UTF-8. However, as noted above, this relies on the Visual Studio English language pack being installed which it isn't always. And that there's no non-ascii output (e.g. file paths, etc). The suggestion to use with seems like a better fix, imho.", "positive_passages": [{"docid": "doc-en-rust-3e310c59d818cf186ba206df2ff602e523016faba098a47a19c1fc182368190f", "text": "\"tempfile\", \"thorin-dwp\", \"tracing\", \"windows 0.46.0\", ] [[package]]", "commid": "rust_pr_110586"}], "negative_passages": []}
{"query_id": "q-en-rust-65b77b7e06ebe16e3cdb656b584dc7bc041446f7fa3f3531ba29eb2e11ea7c50", "query": "A localized (e.g. German) MSVC can produce non-UTF-8 output, which can become almost unreadable in the way it's currently forwarded to the user. This can be seen in an (otherwise unrelated) issue comment: Note especially this line: That output might be a lot longer for multiple LNK errors (one line per error, but the lines are not properly separated in the output, because they are converted to ) and become really hard to read. If possible, the output should be converted to Unicode in this case. (previously reported as ) NOTE from Please install Visual Studio English Language Pack side by side with your favorite language pack for UI\nI\u2019m surprised we try to interpret any output in Windows as UTF-8 as opposed to UTF-16, like we should.\nI suspect that the MSVC compiler's raw output is encoded as Windows-1252 or Windows-1250 (for German) depending on the current console code page and not as UTF-16. Does solve the encoding problem?\nNo, neither in nor in (usually I'm working with the latter). It prints but the encoding problems persist.\nIs it just MSVC which is localized, or is your system codepage itself different? If we know that a certain program's output is the system codepage we could use with to convert it. Perhaps we could check whether the output is UTF-8, and attempt to do the codepage conversion if it isn't.\nI am using a German localized Windows. How can I find out the system codepage? The default CP for console windows is \"850 (OEM - Multilingual Lateinisch I)\".\nYou can call with or .\nThat returns codepage numbers 1252 for and 850 for .\nIn which case the output from the linker appears to be , so now someone just has to add code to rustc which detects when the linker output isn't utf-8 on windows and use with instead.\nI ran into this problem as well (also on a German Windows 10). 
As a workaround it helps if you go to in Win 10 to Settings -Region and Language -Language and add English as a language and make it default ( you may have to log out and back in again). After that, programs like should output using a locale that works with rust as it is right now. It would still be great if this could be fixed in rust :)\nJust saw another instance of this in , except with ) (a common encoding in China), where got printed instead of the desired .\nUsing nightly 2019-07-04, we can confirm that this issue is fixed. (I'm using Chinese version of Visual Studio).\nFor me, it still does not work even with nightly build (1.38 x86_64-pc-windows-msvc, Russian version of VS). I try to compile a Hello world project in Intellij Idea but always get the exception:\nCould you execute and see what that outputs?\nThat's a little strange. Could you open \"x64 Native Tools Command Prompt for Visual Studio 2019\" from start menu, and execute the following command: And see whether the output is English? If the output's still not in English, would you mind open \"Visual Studio Installer\" from start menu, choose \"Modify\" for the corresponding Visual Studio edition, and in the Language packs tab, install the English language pack. And try the instructions above again?\nthanks! The hint about installing the English language pack has made the previous error disappear. But now I have another message while running in Idea: I haven't changed the path to the std library, it's looks like\nYes, this is the same error as before, it's because you didn't install windows sdk. Choose the newest one in the visual studio installer too.\nthank you again. Now it's compiling and running. 
I'm happy ;) Just out of curiosity: was I supposed to know in advance that I would need an English lang pack and a Windows SDK when installing Visual Studio?\nPlease feel free to point out places where Rust tells you to install Visual Studio but does not tell you to install the Windows SDK, so that we can fix those places. It might still be worth implementing the text encoding version of the fix for people who don't have the English language pack, though I really wish we could just get unicode output out of VS without having to specify a language.\nsorry, seems I was a bit inattentive. Now I double-checked: after starting rustup- the screen info says, I should assure that I have Windows SDK installed. So my fault, nothing to fix there :) Thanks.\nThis is not a fix anymore.\nrefer to\nPlease make sure you have installed Visual Studio English Language Pack side by side with your favorite language pack for UI for this to work.\nI'm not sure this problem is really fixed. The PR that closed this kinda works around the issue because English is usually ASCII which is the same in UTF-8. However, as noted above, this relies on the Visual Studio English language pack being installed which it isn't always. And that there's no non-ascii output (e.g. file paths, etc). The suggestion to use with seems like a better fix, imho.", "positive_passages": [{"docid": "doc-en-rust-d75178a812a0c4446607096a6908b31cffdac187df1d605ac77b17af7f47a77a", "text": "version = \"0.30.1\" default-features = false features = [\"read_core\", \"elf\", \"macho\", \"pe\", \"unaligned\", \"archive\", \"write\"] [target.'cfg(windows)'.dependencies.windows] version = \"0.46.0\" features = [\"Win32_Globalization\"] ", "commid": "rust_pr_110586"}], "negative_passages": []}
{"query_id": "q-en-rust-65b77b7e06ebe16e3cdb656b584dc7bc041446f7fa3f3531ba29eb2e11ea7c50", "query": "A localized (e.g. German) MSVC can produce non-UTF-8 output, which can become almost unreadable in the way it's currently forwarded to the user. This can be seen in an (otherwise unrelated) issue comment: Note especially this line: That output might be a lot longer for multiple LNK errors (one line per error, but the lines are not properly separated in the output, because they are converted to ) and become really hard to read. If possible, the output should be converted to Unicode in this case. (previously reported as ) NOTE from Please install Visual Studio English Language Pack side by side with your favorite language pack for UI\nI\u2019m surprised we try to interpret any output in Windows as UTF-8 as opposed to UTF-16, like we should.\nI suspect that the MSVC compiler's raw output is encoded as Windows-1252 or Windows-1250 (for German) depending on the current console code page and not as UTF-16. Does solve the encoding problem?\nNo, neither in nor in (usually I'm working with the latter). It prints but the encoding problems persist.\nIs it just MSVC which is localized, or is your system codepage itself different? If we know that a certain program's output is the system codepage we could use with to convert it. Perhaps we could check whether the output is UTF-8, and attempt to do the codepage conversion if it isn't.\nI am using a German localized Windows. How can I find out the system codepage? The default CP for console windows is \"850 (OEM - Multilingual Lateinisch I)\".\nYou can call with or .\nThat returns codepage numbers 1252 for and 850 for .\nIn which case the output from the linker appears to be , so now someone just has to add code to rustc which detects when the linker output isn't utf-8 on windows and use with instead.\nI ran into this problem as well (also on a German Windows 10). 
As a workaround it helps if you go to in Win 10 to Settings -Region and Language -Language and add English as a language and make it default ( you may have to log out and back in again). After that, programs like should output using a locale that works with rust as it is right now. It would still be great if this could be fixed in rust :)\nJust saw another instance of this in , except with ) (a common encoding in China), where got printed instead of the desired .\nUsing nightly 2019-07-04, we can confirm that this issue is fixed. (I'm using Chinese version of Visual Studio).\nFor me, it still does not work even with nightly build (1.38 x86_64-pc-windows-msvc, Russian version of VS). I try to compile a Hello world project in Intellij Idea but always get the exception:\nCould you execute and see what that outputs?\nThat's a little strange. Could you open \"x64 Native Tools Command Prompt for Visual Studio 2019\" from start menu, and execute the following command: And see whether the output is English? If the output's still not in English, would you mind open \"Visual Studio Installer\" from start menu, choose \"Modify\" for the corresponding Visual Studio edition, and in the Language packs tab, install the English language pack. And try the instructions above again?\nthanks! The hint about installing the English language pack has made the previous error disappear. But now I have another message while running in Idea: I haven't changed the path to the std library, it's looks like\nYes, this is the same error as before, it's because you didn't install windows sdk. Choose the newest one in the visual studio installer too.\nthank you again. Now it's compiling and running. 
I'm happy ;) Just out of curiosity: was I supposed to know in advance that I would need an English lang pack and a Windows SDK when installing Visual Studio?\nPlease feel free to point out places where Rust tells you to install Visual Studio but does not tell you to install the Windows SDK, so that we can fix those places. It might still be worth implementing the text encoding version of the fix for people who don't have the English language pack, though I really wish we could just get unicode output out of VS without having to specify a language.\nsorry, seems I was a bit inattentive. Now I double-checked: after starting rustup- the screen info says, I should assure that I have Windows SDK installed. So my fault, nothing to fix there :) Thanks.\nThis is not a fix anymore.\nrefer to\nPlease make sure you have installed Visual Studio English Language Pack side by side with your favorite language pack for UI for this to work.\nI'm not sure this problem is really fixed. The PR that closed this kinda works around the issue because English is usually ASCII which is the same in UTF-8. However, as noted above, this relies on the Visual Studio English language pack being installed which it isn't always. And that there's no non-ascii output (e.g. file paths, etc). The suggestion to use with seems like a better fix, imho.", "positive_passages": [{"docid": "doc-en-rust-6051f9024d454ecd6c0122ea5d2a3ff370f4690246f59e55aeb952e2ecabf673", "text": "if !prog.status.success() { let mut output = prog.stderr.clone(); output.extend_from_slice(&prog.stdout); let escaped_output = escape_string(&output); let escaped_output = escape_linker_output(&output, flavor); // FIXME: Add UI tests for this error. let err = errors::LinkingFailed { linker_path: &linker_path,", "commid": "rust_pr_110586"}], "negative_passages": []}
{"query_id": "q-en-rust-65b77b7e06ebe16e3cdb656b584dc7bc041446f7fa3f3531ba29eb2e11ea7c50", "query": "A localized (e.g. German) MSVC can produce non-UTF-8 output, which can become almost unreadable in the way it's currently forwarded to the user. This can be seen in an (otherwise unrelated) issue comment: Note especially this line: That output might be a lot longer for multiple LNK errors (one line per error, but the lines are not properly separated in the output, because they are converted to ) and become really hard to read. If possible, the output should be converted to Unicode in this case. (previously reported as ) NOTE from Please install Visual Studio English Language Pack side by side with your favorite language pack for UI\nI\u2019m surprised we try to interpret any output in Windows as UTF-8 as opposed to UTF-16, like we should.\nI suspect that the MSVC compiler's raw output is encoded as Windows-1252 or Windows-1250 (for German) depending on the current console code page and not as UTF-16. Does solve the encoding problem?\nNo, neither in nor in (usually I'm working with the latter). It prints but the encoding problems persist.\nIs it just MSVC which is localized, or is your system codepage itself different? If we know that a certain program's output is the system codepage we could use with to convert it. Perhaps we could check whether the output is UTF-8, and attempt to do the codepage conversion if it isn't.\nI am using a German localized Windows. How can I find out the system codepage? The default CP for console windows is \"850 (OEM - Multilingual Lateinisch I)\".\nYou can call with or .\nThat returns codepage numbers 1252 for and 850 for .\nIn which case the output from the linker appears to be , so now someone just has to add code to rustc which detects when the linker output isn't utf-8 on windows and use with instead.\nI ran into this problem as well (also on a German Windows 10). 
As a workaround it helps if you go to in Win 10 to Settings -Region and Language -Language and add English as a language and make it default ( you may have to log out and back in again). After that, programs like should output using a locale that works with rust as it is right now. It would still be great if this could be fixed in rust :)\nJust saw another instance of this in , except with ) (a common encoding in China), where got printed instead of the desired .\nUsing nightly 2019-07-04, we can confirm that this issue is fixed. (I'm using Chinese version of Visual Studio).\nFor me, it still does not work even with nightly build (1.38 x86_64-pc-windows-msvc, Russian version of VS). I try to compile a Hello world project in Intellij Idea but always get the exception:\nCould you execute and see what that outputs?\nThat's a little strange. Could you open \"x64 Native Tools Command Prompt for Visual Studio 2019\" from start menu, and execute the following command: And see whether the output is English? If the output's still not in English, would you mind open \"Visual Studio Installer\" from start menu, choose \"Modify\" for the corresponding Visual Studio edition, and in the Language packs tab, install the English language pack. And try the instructions above again?\nthanks! The hint about installing the English language pack has made the previous error disappear. But now I have another message while running in Idea: I haven't changed the path to the std library, it's looks like\nYes, this is the same error as before, it's because you didn't install windows sdk. Choose the newest one in the visual studio installer too.\nthank you again. Now it's compiling and running. 
I'm happy ;) Just out of curiosity: was I supposed to know in advance that I would need an English lang pack and a Windows SDK when installing Visual Studio?\nPlease feel free to point out places where Rust tells you to install Visual Studio but does not tell you to install the Windows SDK, so that we can fix those places. It might still be worth implementing the text encoding version of the fix for people who don't have the English language pack, though I really wish we could just get unicode output out of VS without having to specify a language.\nsorry, seems I was a bit inattentive. Now I double-checked: after starting rustup- the screen info says, I should assure that I have Windows SDK installed. So my fault, nothing to fix there :) Thanks.\nThis is not a fix anymore.\nrefer to\nPlease make sure you have installed Visual Studio English Language Pack side by side with your favorite language pack for UI for this to work.\nI'm not sure this problem is really fixed. The PR that closed this kinda works around the issue because English is usually ASCII which is the same in UTF-8. However, as noted above, this relies on the Visual Studio English language pack being installed which it isn't always. And that there's no non-ascii output (e.g. file paths, etc). The suggestion to use with seems like a better fix, imho.", "positive_passages": [{"docid": "doc-en-rust-e31fad30eb1a03dc28c092f0cd0313b70028c639e3f1d04d78cf3128ef1f4854", "text": "} } #[cfg(not(windows))] fn escape_linker_output(s: &[u8], _flavour: LinkerFlavor) -> String { escape_string(s) } /// If the output of the msvc linker is not UTF-8 and the host is Windows, /// then try to convert the string from the OEM encoding. #[cfg(windows)] fn escape_linker_output(s: &[u8], flavour: LinkerFlavor) -> String { // This only applies to the actual MSVC linker. 
if flavour != LinkerFlavor::Msvc(Lld::No) { return escape_string(s); } match str::from_utf8(s) { Ok(s) => return s.to_owned(), Err(_) => match win::locale_byte_str_to_string(s, win::oem_code_page()) { Some(s) => s, // The string is not UTF-8 and isn't valid for the OEM code page None => format!(\"Non-UTF-8 output: {}\", s.escape_ascii()), }, } } /// Wrappers around the Windows API. #[cfg(windows)] mod win { use windows::Win32::Globalization::{ GetLocaleInfoEx, MultiByteToWideChar, CP_OEMCP, LOCALE_IUSEUTF8LEGACYOEMCP, LOCALE_NAME_SYSTEM_DEFAULT, LOCALE_RETURN_NUMBER, MB_ERR_INVALID_CHARS, }; /// Get the Windows system OEM code page. This is most notably the code page /// used for link.exe's output. pub fn oem_code_page() -> u32 { unsafe { let mut cp: u32 = 0; // We're using the `LOCALE_RETURN_NUMBER` flag to return a u32. // But the API requires us to pass the data as though it's a [u16] string. let len = std::mem::size_of:: sess.note_without_error(\"the msvc targets depend on the msvc linker but `link.exe` was not found\"); sess.note_without_error(\"please ensure that VS 2013, VS 2015 or VS 2017 was installed with the Visual C++ option\"); sess.note_without_error( \"the msvc targets depend on the msvc linker but `link.exe` was not found\", ); sess.note_without_error( \"please ensure that VS 2013, VS 2015, VS 2017 or VS 2019 was installed with the Visual C++ option\", ); } sess.abort_if_errors(); }", "commid": "rust_pr_62021"}], "negative_passages": []}
{"query_id": "q-en-rust-65b77b7e06ebe16e3cdb656b584dc7bc041446f7fa3f3531ba29eb2e11ea7c50", "query": "A localized (e.g. German) MSVC can produce non-UTF-8 output, which can become almost unreadable in the way it's currently forwarded to the user. This can be seen in an (otherwise unrelated) issue comment: Note especially this line: That output might be a lot longer for multiple LNK errors (one line per error, but the lines are not properly separated in the output, because they are converted to ) and become really hard to read. If possible, the output should be converted to Unicode in this case. (previously reported as ) NOTE from Please install Visual Studio English Language Pack side by side with your favorite language pack for UI\nI\u2019m surprised we try to interpret any output in Windows as UTF-8 as opposed to UTF-16, like we should.\nI suspect that the MSVC compiler's raw output is encoded as Windows-1252 or Windows-1250 (for German) depending on the current console code page and not as UTF-16. Does solve the encoding problem?\nNo, neither in nor in (usually I'm working with the latter). It prints but the encoding problems persist.\nIs it just MSVC which is localized, or is your system codepage itself different? If we know that a certain program's output is the system codepage we could use with to convert it. Perhaps we could check whether the output is UTF-8, and attempt to do the codepage conversion if it isn't.\nI am using a German localized Windows. How can I find out the system codepage? The default CP for console windows is \"850 (OEM - Multilingual Lateinisch I)\".\nYou can call with or .\nThat returns codepage numbers 1252 for and 850 for .\nIn which case the output from the linker appears to be , so now someone just has to add code to rustc which detects when the linker output isn't utf-8 on windows and use with instead.\nI ran into this problem as well (also on a German Windows 10). 
As a workaround it helps if you go to in Win 10 to Settings -Region and Language -Language and add English as a language and make it default ( you may have to log out and back in again). After that, programs like should output using a locale that works with rust as it is right now. It would still be great if this could be fixed in rust :)\nJust saw another instance of this in , except with ) (a common encoding in China), where got printed instead of the desired .\nUsing nightly 2019-07-04, we can confirm that this issue is fixed. (I'm using Chinese version of Visual Studio).\nFor me, it still does not work even with nightly build (1.38 x86_64-pc-windows-msvc, Russian version of VS). I try to compile a Hello world project in Intellij Idea but always get the exception:\nCould you execute and see what that outputs?\nThat's a little strange. Could you open \"x64 Native Tools Command Prompt for Visual Studio 2019\" from start menu, and execute the following command: And see whether the output is English? If the output's still not in English, would you mind open \"Visual Studio Installer\" from start menu, choose \"Modify\" for the corresponding Visual Studio edition, and in the Language packs tab, install the English language pack. And try the instructions above again?\nthanks! The hint about installing the English language pack has made the previous error disappear. But now I have another message while running in Idea: I haven't changed the path to the std library, it's looks like\nYes, this is the same error as before, it's because you didn't install windows sdk. Choose the newest one in the visual studio installer too.\nthank you again. Now it's compiling and running. 
I'm happy ;) Just out of curiosity: was I supposed to know in advance that I would need an English lang pack and a Windows SDK when installing Visual Studio?\nPlease feel free to point out places where Rust tells you to install Visual Studio but does not tell you to install the Windows SDK, so that we can fix those places. It might still be worth implementing the text encoding version of the fix for people who don't have the English language pack, though I really wish we could just get unicode output out of VS without having to specify a language.\nsorry, seems I was a bit inattentive. Now I double-checked: after starting rustup- the screen info says, I should assure that I have Windows SDK installed. So my fault, nothing to fix there :) Thanks.\nThis is not a fix anymore.\nrefer to\nPlease make sure you have installed Visual Studio English Language Pack side by side with your favorite language pack for UI for this to work.\nI'm not sure this problem is really fixed. The PR that closed this kinda works around the issue because English is usually ASCII which is the same in UTF-8. However, as noted above, this relies on the Visual Studio English language pack being installed which it isn't always. And that there's no non-ascii output (e.g. file paths, etc). The suggestion to use with seems like a better fix, imho.", "positive_passages": [{"docid": "doc-en-rust-110741c6b5677bb67eda59812bd0aafcb890919d102f39238e5c95969a5b756b", "text": "target_family: Some(\"windows\".to_string()), is_like_windows: true, is_like_msvc: true, // set VSLANG to 1033 can prevent link.exe from using // language packs, and avoid generating Non-UTF-8 error // messages if a link error occurred. link_env: vec![(\"VSLANG\".to_string(), \"1033\".to_string())], pre_link_args: args, crt_static_allows_dylibs: true, crt_static_respected: true,", "commid": "rust_pr_62021"}], "negative_passages": []}
{"query_id": "q-en-rust-0483b61e9654161151d5b42d5b803a6be5ccea3c7401c4a0380da8c5c3e28b1c", "query": "Full code: Given a smart pointer, which only works on types implementing the trait or on trait objects, such that is allowed to coerce to , and the corresponding implementation : Applying the operator on two will try to coerce them to , even though this isn't needed. This is an issue since coercion moves the original pointer. Removing the implementation lets the code build, which means that the coercion is not needed. As , if the is restricted to pointers with identical types (), then no coercion occurs. $DIR/issue-42060.rs:13:23 | LL | let other: typeof(thing) = thing; //~ ERROR attempt to use a non-constant value in a constant | ^^^^^ non-constant value error[E0435]: attempt to use a non-constant value in a constant --> $DIR/issue-42060.rs:19:13 | LL | use super::MirContext; use super::{MirContext, LocalRef}; use super::constant::const_scalar_checked_binop; use super::operand::{OperandRef, OperandValue}; use super::lvalue::LvalueRef;", "commid": "rust_pr_44060"}], "negative_passages": []}
{"query_id": "q-en-rust-ff3d0d1f9967b6655cfb6db452de61f802dc0c70856137ca311fce2372a9328e", "query": "This code: gives this ICE: Doesn't happen if you use e.g. instead of , or use instead of .\nMinimized to:\nFixed in .", "positive_passages": [{"docid": "doc-en-rust-4f67c1c402ed652ffca2f3f4a25bcad630cbbe057da44b43bab5660935a8f894", "text": "} mir::Rvalue::Len(ref lvalue) => { let tr_lvalue = self.trans_lvalue(&bcx, lvalue); let size = self.evaluate_array_len(&bcx, lvalue); let operand = OperandRef { val: OperandValue::Immediate(tr_lvalue.len(bcx.ccx)), val: OperandValue::Immediate(size), ty: bcx.tcx().types.usize, }; (bcx, operand)", "commid": "rust_pr_44060"}], "negative_passages": []}
{"query_id": "q-en-rust-ff3d0d1f9967b6655cfb6db452de61f802dc0c70856137ca311fce2372a9328e", "query": "This code: gives this ICE: Doesn't happen if you use e.g. instead of , or use instead of .\nMinimized to:\nFixed in .", "positive_passages": [{"docid": "doc-en-rust-9e548218d033d975c449d10e00be7fa96561bdff1ac49c328e3a9338ac7a1df7", "text": "} } fn evaluate_array_len(&mut self, bcx: &Builder<'a, 'tcx>, lvalue: &mir::Lvalue<'tcx>) -> ValueRef { // ZST are passed as operands and require special handling // because trans_lvalue() panics if Local is operand. if let mir::Lvalue::Local(index) = *lvalue { if let LocalRef::Operand(Some(op)) = self.locals[index] { if common::type_is_zero_size(bcx.ccx, op.ty) { if let ty::TyArray(_, n) = op.ty.sty { return common::C_uint(bcx.ccx, n); } } } } // use common size calculation for non zero-sized types let tr_value = self.trans_lvalue(&bcx, lvalue); return tr_value.len(bcx.ccx); } pub fn trans_scalar_binop(&mut self, bcx: &Builder<'a, 'tcx>, op: mir::BinOp,", "commid": "rust_pr_44060"}], "negative_passages": []}
{"query_id": "q-en-rust-ff3d0d1f9967b6655cfb6db452de61f802dc0c70856137ca311fce2372a9328e", "query": "This code: gives this ICE: Doesn't happen if you use e.g. instead of , or use instead of .\nMinimized to:\nFixed in .", "positive_passages": [{"docid": "doc-en-rust-44ae3f81bea7de82a8daa5e2465e063bbbf62931b384b9b24526faed6a52a717", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 COPY dist-i686-freebsd/build-toolchain.sh /tmp/ RUN /tmp/build-toolchain.sh i686 COPY scripts/freebsd-toolchain.sh /tmp/ RUN /tmp/freebsd-toolchain.sh i686 COPY scripts/sccache.sh /scripts/ RUN sh /scripts/sccache.sh ENV AR_i686_unknown_freebsd=i686-unknown-freebsd10-ar CC_i686_unknown_freebsd=i686-unknown-freebsd10-gcc CXX_i686_unknown_freebsd=i686-unknown-freebsd10-g++ CC_i686_unknown_freebsd=i686-unknown-freebsd10-clang CXX_i686_unknown_freebsd=i686-unknown-freebsd10-clang++ ENV HOSTS=i686-unknown-freebsd", "commid": "rust_pr_46941"}], "negative_passages": []}
{"query_id": "q-en-rust-2779eafb2ac6e0a9082d7a8aae9add135f589d5d5b723f4540b45d54e7cd7aa0", "query": "Steps to reproduce, on my FreeBSD/amd64 11.0 system: , where is: It's not 100%, but it usually happens after a few tries at most. I'll skip all the debugging and source-diving and just say that it's the same basic problem as : threads A and B are both in , thread A gets to first and s a lambda, then thread B enters and blocks in with the global mutex held, then thread A calls reentrantly on a different and deadlocks. But there might be a nicer solution than for , because has this patch: , on all platforms where they use Clang (apparently x86, little-endian ARM, and PPC), and 10.x is now the . appears to have a self-contained that doesn't use , and emprically the Rust 1.18.0 package available via doesn't deadlock. So, it might be enough to upstream that patch and update the build environment to 10.x if it isn't already.\nLooks like we're using 10.2 on ci:\nThanks for the pointer. I took a (much) closer look at this, and I've gotten the build more or less working using Ubuntu's regular Clang package, which can cross-compile out of the box. I'll send a PR once I've gotten things cleaned up and tested a bit more.\nSo I'm kind of stuck because, when I try to test locally, I run into unless I remove , which isn't maximally useful because I don't have Cargo and therefore can't build Cargo. This happens even without my changes, but the same container running on Travis CI seems to work, and I'm at a loss as to what differences there could be that would affect (apparently?) the type checker. I suppose I could just test what I can and send a PR and hope for the best, but if possible I'd prefer to understand what my local setup is doing wrong.\nNoticed this on 12-CURRENT, compilers from rustup always lock up when compiling , but it usually succeeds after a couple tries. It's nearly always !\nHmm, the rust from ports (that doesn't have this problem IIRC??) 
is built with bundled llvm by default\u2026", "positive_passages": [{"docid": "doc-en-rust-f43be0dd38b3adfdd81d77e16569ad2dac730954e961aea7c03052de7a95768c", "text": "FROM ubuntu:16.04 RUN apt-get update && apt-get install -y --no-install-recommends g++ clang make file curl ", "commid": "rust_pr_46941"}], "negative_passages": []}
{"query_id": "q-en-rust-2779eafb2ac6e0a9082d7a8aae9add135f589d5d5b723f4540b45d54e7cd7aa0", "query": "Steps to reproduce, on my FreeBSD/amd64 11.0 system: , where is: It's not 100%, but it usually happens after a few tries at most. I'll skip all the debugging and source-diving and just say that it's the same basic problem as : threads A and B are both in , thread A gets to first and s a lambda, then thread B enters and blocks in with the global mutex held, then thread A calls reentrantly on a different and deadlocks. But there might be a nicer solution than for , because has this patch: , on all platforms where they use Clang (apparently x86, little-endian ARM, and PPC), and 10.x is now the . appears to have a self-contained that doesn't use , and emprically the Rust 1.18.0 package available via doesn't deadlock. So, it might be enough to upstream that patch and update the build environment to 10.x if it isn't already.\nLooks like we're using 10.2 on ci:\nThanks for the pointer. I took a (much) closer look at this, and I've gotten the build more or less working using Ubuntu's regular Clang package, which can cross-compile out of the box. I'll send a PR once I've gotten things cleaned up and tested a bit more.\nSo I'm kind of stuck because, when I try to test locally, I run into unless I remove , which isn't maximally useful because I don't have Cargo and therefore can't build Cargo. This happens even without my changes, but the same container running on Travis CI seems to work, and I'm at a loss as to what differences there could be that would affect (apparently?) the type checker. I suppose I could just test what I can and send a PR and hope for the best, but if possible I'd prefer to understand what my local setup is doing wrong.\nNoticed this on 12-CURRENT, compilers from rustup always lock up when compiling , but it usually succeeds after a couple tries. It's nearly always !\nHmm, the rust from ports (that doesn't have this problem IIRC??) 
is built with bundled llvm by default\u2026", "positive_passages": [{"docid": "doc-en-rust-cb739ac321b2600af9dca769b083f2af653cc2705f15c2af8f01c11be1811825", "text": "libssl-dev pkg-config COPY dist-x86_64-freebsd/build-toolchain.sh /tmp/ RUN /tmp/build-toolchain.sh x86_64 COPY scripts/freebsd-toolchain.sh /tmp/ RUN /tmp/freebsd-toolchain.sh x86_64 COPY scripts/sccache.sh /scripts/ RUN sh /scripts/sccache.sh ENV AR_x86_64_unknown_freebsd=x86_64-unknown-freebsd10-ar CC_x86_64_unknown_freebsd=x86_64-unknown-freebsd10-gcc CXX_x86_64_unknown_freebsd=x86_64-unknown-freebsd10-g++ CC_x86_64_unknown_freebsd=x86_64-unknown-freebsd10-clang CXX_x86_64_unknown_freebsd=x86_64-unknown-freebsd10-clang++ ENV HOSTS=x86_64-unknown-freebsd", "commid": "rust_pr_46941"}], "negative_passages": []}
{"query_id": "q-en-rust-2779eafb2ac6e0a9082d7a8aae9add135f589d5d5b723f4540b45d54e7cd7aa0", "query": "Steps to reproduce, on my FreeBSD/amd64 11.0 system: , where is: It's not 100%, but it usually happens after a few tries at most. I'll skip all the debugging and source-diving and just say that it's the same basic problem as : threads A and B are both in , thread A gets to first and s a lambda, then thread B enters and blocks in with the global mutex held, then thread A calls reentrantly on a different and deadlocks. But there might be a nicer solution than for , because has this patch: , on all platforms where they use Clang (apparently x86, little-endian ARM, and PPC), and 10.x is now the . appears to have a self-contained that doesn't use , and emprically the Rust 1.18.0 package available via doesn't deadlock. So, it might be enough to upstream that patch and update the build environment to 10.x if it isn't already.\nLooks like we're using 10.2 on ci:\nThanks for the pointer. I took a (much) closer look at this, and I've gotten the build more or less working using Ubuntu's regular Clang package, which can cross-compile out of the box. I'll send a PR once I've gotten things cleaned up and tested a bit more.\nSo I'm kind of stuck because, when I try to test locally, I run into unless I remove , which isn't maximally useful because I don't have Cargo and therefore can't build Cargo. This happens even without my changes, but the same container running on Travis CI seems to work, and I'm at a loss as to what differences there could be that would affect (apparently?) the type checker. I suppose I could just test what I can and send a PR and hope for the best, but if possible I'd prefer to understand what my local setup is doing wrong.\nNoticed this on 12-CURRENT, compilers from rustup always lock up when compiling , but it usually succeeds after a couple tries. It's nearly always !\nHmm, the rust from ports (that doesn't have this problem IIRC??) 
is built with bundled llvm by default\u2026", "positive_passages": [{"docid": "doc-en-rust-e458fc54eefaa2caed11e4bce9257a5a3063f2040aac4893266093929989f446", "text": " #!/usr/bin/env bash # Copyright 2016 The Rust Project Developers. See the COPYRIGHT # file at the top-level directory of this distribution and at # http://rust-lang.org/COPYRIGHT. # # Licensed under the Apache License, Version 2.0 ", "commid": "rust_pr_46941"}], "negative_passages": []}
{"query_id": "q-en-rust-2779eafb2ac6e0a9082d7a8aae9add135f589d5d5b723f4540b45d54e7cd7aa0", "query": "Steps to reproduce, on my FreeBSD/amd64 11.0 system: , where is: It's not 100%, but it usually happens after a few tries at most. I'll skip all the debugging and source-diving and just say that it's the same basic problem as : threads A and B are both in , thread A gets to first and s a lambda, then thread B enters and blocks in with the global mutex held, then thread A calls reentrantly on a different and deadlocks. But there might be a nicer solution than for , because has this patch: , on all platforms where they use Clang (apparently x86, little-endian ARM, and PPC), and 10.x is now the . appears to have a self-contained that doesn't use , and emprically the Rust 1.18.0 package available via doesn't deadlock. So, it might be enough to upstream that patch and update the build environment to 10.x if it isn't already.\nLooks like we're using 10.2 on ci:\nThanks for the pointer. I took a (much) closer look at this, and I've gotten the build more or less working using Ubuntu's regular Clang package, which can cross-compile out of the box. I'll send a PR once I've gotten things cleaned up and tested a bit more.\nSo I'm kind of stuck because, when I try to test locally, I run into unless I remove , which isn't maximally useful because I don't have Cargo and therefore can't build Cargo. This happens even without my changes, but the same container running on Travis CI seems to work, and I'm at a loss as to what differences there could be that would affect (apparently?) the type checker. I suppose I could just test what I can and send a PR and hope for the best, but if possible I'd prefer to understand what my local setup is doing wrong.\nNoticed this on 12-CURRENT, compilers from rustup always lock up when compiling , but it usually succeeds after a couple tries. It's nearly always !\nHmm, the rust from ports (that doesn't have this problem IIRC??) 
is built with bundled llvm by default\u2026", "positive_passages": [{"docid": "doc-en-rust-7eb4064eb5f35b9e9996e162c41bd72e4ff71c09389c24d1437a5c67120c9db1", "text": " #!/bin/bash # Copyright 2016-2017 The Rust Project Developers. See the COPYRIGHT # file at the top-level directory of this distribution and at # http://rust-lang.org/COPYRIGHT. # # Licensed under the Apache License, Version 2.0 version = \"0.8.2\" source = \"registry+https://github.com/rust-lang/crates.io-index\" [[package]] name = \"bitflags\" version = \"0.9.1\" source = \"registry+https://github.com/rust-lang/crates.io-index\"", "commid": "rust_pr_45421"}], "negative_passages": []}
{"query_id": "q-en-rust-6b3140c4ac37a443418f17da0cb04077cfc140e10f76134da465372bc66b86a3", "query": "generates different HTML under and not. Not only does this trigger what's effectively a spurious warning, but it doesn't render correctly either. cc , marking this as p-high\nThis was fixed by google/pulldown-cmark rustdoc just needs to update its version of pulldown-cmark.\nThe thread in that issue says it was released in 0.0.15. we're pulling in 0.0.14 for rustdoc, but also 0.1.0 for mdbook. I'll see if i can update the dependency for rustdoc and pull it in.\nupdates Pulldown for rustdoc and fixes this.\nI am not sure this actually fixes the warnings, as they use different IDs and such. I'm testing this locally right now though; expect an update in ~20 minutes", "positive_passages": [{"docid": "doc-en-rust-dc8f317de13abe52e63288ce62a368e745646ac1f00b40845f83e1fc4f58338e", "text": "[[package]] name = \"pulldown-cmark\" version = \"0.0.14\" source = \"registry+https://github.com/rust-lang/crates.io-index\" dependencies = [ \"bitflags 0.8.2 (registry+https://github.com/rust-lang/crates.io-index)\", ] [[package]] name = \"pulldown-cmark\" version = \"0.0.15\" source = \"registry+https://github.com/rust-lang/crates.io-index\" dependencies = [", "commid": "rust_pr_45421"}], "negative_passages": []}
{"query_id": "q-en-rust-6b3140c4ac37a443418f17da0cb04077cfc140e10f76134da465372bc66b86a3", "query": "generates different HTML under and not. Not only does this trigger what's effectively a spurious warning, but it doesn't render correctly either. cc , marking this as p-high\nThis was fixed by google/pulldown-cmark rustdoc just needs to update its version of pulldown-cmark.\nThe thread in that issue says it was released in 0.0.15. we're pulling in 0.0.14 for rustdoc, but also 0.1.0 for mdbook. I'll see if i can update the dependency for rustdoc and pull it in.\nupdates Pulldown for rustdoc and fixes this.\nI am not sure this actually fixes the warnings, as they use different IDs and such. I'm testing this locally right now though; expect an update in ~20 minutes", "positive_passages": [{"docid": "doc-en-rust-0d2b9be350dbea0ffe71875b2643413cbd75b05cf8bc0fd3b2b3fd859782d124", "text": "\"env_logger 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)\", \"html-diff 0.0.4 (registry+https://github.com/rust-lang/crates.io-index)\", \"log 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)\", \"pulldown-cmark 0.0.14 (registry+https://github.com/rust-lang/crates.io-index)\", \"pulldown-cmark 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)\", ] [[package]]", "commid": "rust_pr_45421"}], "negative_passages": []}
{"query_id": "q-en-rust-6b3140c4ac37a443418f17da0cb04077cfc140e10f76134da465372bc66b86a3", "query": "generates different HTML under and not. Not only does this trigger what's effectively a spurious warning, but it doesn't render correctly either. cc , marking this as p-high\nThis was fixed by google/pulldown-cmark rustdoc just needs to update its version of pulldown-cmark.\nThe thread in that issue says it was released in 0.0.15. we're pulling in 0.0.14 for rustdoc, but also 0.1.0 for mdbook. I'll see if i can update the dependency for rustdoc and pull it in.\nupdates Pulldown for rustdoc and fixes this.\nI am not sure this actually fixes the warnings, as they use different IDs and such. I'm testing this locally right now though; expect an update in ~20 minutes", "positive_passages": [{"docid": "doc-en-rust-64ee7aff51da1c035cec665b1ad6d61948954f7ab089de3bed35073bf0be7385", "text": "\"checksum backtrace 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)\" = \"99f2ce94e22b8e664d95c57fff45b98a966c2252b60691d0b7aeeccd88d70983\" \"checksum backtrace-sys 0.1.14 (registry+https://github.com/rust-lang/crates.io-index)\" = \"c63ea141ef8fdb10409d0f5daf30ac51f84ef43bff66f16627773d2a292cd189\" \"checksum bitflags 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)\" = \"aad18937a628ec6abcd26d1489012cc0e18c21798210f491af69ded9b881106d\" \"checksum bitflags 0.8.2 (registry+https://github.com/rust-lang/crates.io-index)\" = \"1370e9fc2a6ae53aea8b7a5110edbd08836ed87c88736dfabccade1c2b44bff4\" \"checksum bitflags 0.9.1 (registry+https://github.com/rust-lang/crates.io-index)\" = \"4efd02e230a02e18f92fc2735f44597385ed02ad8f831e7c1c1156ee5e1ab3a5\" \"checksum bitflags 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)\" = \"f5cde24d1b2e2216a726368b2363a273739c91f4e3eb4e0dd12d672d396ad989\" \"checksum bufstream 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)\" = \"f2f382711e76b9de6c744cc00d0497baba02fb00a787f088c879f01d09468e32\"", "commid": "rust_pr_45421"}], "negative_passages": []}
{"query_id": "q-en-rust-6b3140c4ac37a443418f17da0cb04077cfc140e10f76134da465372bc66b86a3", "query": "generates different HTML under and not. Not only does this trigger what's effectively a spurious warning, but it doesn't render correctly either. cc , marking this as p-high\nThis was fixed by google/pulldown-cmark rustdoc just needs to update its version of pulldown-cmark.\nThe thread in that issue says it was released in 0.0.15. we're pulling in 0.0.14 for rustdoc, but also 0.1.0 for mdbook. I'll see if i can update the dependency for rustdoc and pull it in.\nupdates Pulldown for rustdoc and fixes this.\nI am not sure this actually fixes the warnings, as they use different IDs and such. I'm testing this locally right now though; expect an update in ~20 minutes", "positive_passages": [{"docid": "doc-en-rust-3efae0781d9bea1f58fd26275198b7fd5ede569569c5373638cad8895b714b48", "text": "\"checksum precomputed-hash 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)\" = \"cdf1fc3616b3ef726a847f2cd2388c646ef6a1f1ba4835c2629004da48184150\" \"checksum procedural-masquerade 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)\" = \"c93cdc1fb30af9ddf3debc4afbdb0f35126cbd99daa229dd76cdd5349b41d989\" \"checksum psapi-sys 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)\" = \"abcd5d1a07d360e29727f757a9decb3ce8bc6e0efa8969cfaad669a8317a2478\" \"checksum pulldown-cmark 0.0.14 (registry+https://github.com/rust-lang/crates.io-index)\" = \"d9ab1e588ef8efd702c7ed9d2bd774db5e6f4d878bb5a1a9f371828fbdff6973\" \"checksum pulldown-cmark 0.0.15 (registry+https://github.com/rust-lang/crates.io-index)\" = \"378e941dbd392c101f2cb88097fa4d7167bc421d4b88de3ff7dbee503bc3233b\" \"checksum pulldown-cmark 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)\" = \"a656fdb8b6848f896df5e478a0eb9083681663e37dcb77dd16981ff65329fe8b\" \"checksum quick-error 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)\" = \"eda5fe9b71976e62bc81b781206aaa076401769b2143379d3eb2118388babac4\"", "commid": "rust_pr_45421"}], "negative_passages": []}
{"query_id": "q-en-rust-6b3140c4ac37a443418f17da0cb04077cfc140e10f76134da465372bc66b86a3", "query": "generates different HTML under and not. Not only does this trigger what's effectively a spurious warning, but it doesn't render correctly either. cc , marking this as p-high\nThis was fixed by google/pulldown-cmark rustdoc just needs to update its version of pulldown-cmark.\nThe thread in that issue says it was released in 0.0.15. we're pulling in 0.0.14 for rustdoc, but also 0.1.0 for mdbook. I'll see if i can update the dependency for rustdoc and pull it in.\nupdates Pulldown for rustdoc and fixes this.\nI am not sure this actually fixes the warnings, as they use different IDs and such. I'm testing this locally right now though; expect an update in ~20 minutes", "positive_passages": [{"docid": "doc-en-rust-a243f1fb1d7282a33e98f2b9ba8242021d6edd4790a1108ddea1a68e6a4f366d", "text": "[dependencies] env_logger = { version = \"0.4\", default-features = false } log = \"0.3\" pulldown-cmark = { version = \"0.0.14\", default-features = false } pulldown-cmark = { version = \"0.1.0\", default-features = false } html-diff = \"0.0.4\" [build-dependencies]", "commid": "rust_pr_45421"}], "negative_passages": []}
{"query_id": "q-en-rust-6b3140c4ac37a443418f17da0cb04077cfc140e10f76134da465372bc66b86a3", "query": "generates different HTML under and not. Not only does this trigger what's effectively a spurious warning, but it doesn't render correctly either. cc , marking this as p-high\nThis was fixed by google/pulldown-cmark rustdoc just needs to update its version of pulldown-cmark.\nThe thread in that issue says it was released in 0.0.15. we're pulling in 0.0.14 for rustdoc, but also 0.1.0 for mdbook. I'll see if i can update the dependency for rustdoc and pull it in.\nupdates Pulldown for rustdoc and fixes this.\nI am not sure this actually fixes the warnings, as they use different IDs and such. I'm testing this locally right now though; expect an update in ~20 minutes", "positive_passages": [{"docid": "doc-en-rust-9bfbd62dd3e8c6443cbe0215b081fa95b7eb60020c3323f3286d56f2b3fcb23f", "text": "match self.inner.next() { Some(Event::FootnoteReference(ref reference)) => { let entry = self.get_entry(&reference); let reference = format!(\"{0} let reference = format!(\"{0} \"); for (mut content, id) in v {
write!(ret, \" write!(ret, \" \" \u21a9\", \" \u21a9\", id).unwrap(); if is_paragraph { ret.push_str(\"\");", "commid": "rust_pr_45421"}], "negative_passages": []}
{"query_id": "q-en-rust-de9df3815052fb4d4f0b86178e17f6faccc696328a590af5ce0248acf6328f62", "query": "Hey, I'm curious about the . From what I understand: this works because Path is a single member struct containing only the type it's being reinterpreted as. My original question on #rust IRC was: Is this always guaranteed to be correct (in Rust) in terms of memory layout? The discussion seemed to hint at that this is in fact not a guarantee, but that std is in a unique position being developed in concert with the language and any future breakage would be patched when it occurs. I would love some clarification if this usage is correct or to what degree it is not. I'd also suggest we add clarification around this case with comments or support functions to aid future spelunking into std.\nNo, strictly speaking, this is not safe without .\nexcellent, thank you! The seems to indicate things going slowly. Adding a comment might still be a good idea for now. Reading the RFC, it mentions at least ARM64 where there might be layout differences. But according to , std shows up as supported as a Tier 2 platform. This implies that at least for now, this pattern in this instance () works. Is this correct?\nThat ARM64 issue seems to be related to calling convention, not memory layout, so it's not an issue here.\nThis is my understanding as well. I believe it is safe with either repr(transparent) or repr(C), the latter of which is stable. The crate is a generalization of this safe cast.", "positive_passages": [{"docid": "doc-en-rust-6fb9f940d966b57cb742063c69d9a40c04806dd4416bdd27f9d73d16d5639ceb", "text": "} // See note at the top of this module to understand why these are used: // // These casts are safe as OsStr is internally a wrapper around [u8] on all // platforms. 
// // Note that currently this relies on the special knowledge that libstd has; // these types are single-element structs but are not marked repr(transparent) // or repr(C) which would make these casts allowable outside std. fn os_str_as_u8_slice(s: &OsStr) -> &[u8] { unsafe { &*(s as *const OsStr as *const [u8]) } }", "commid": "rust_pr_67635"}], "negative_passages": []}
{"query_id": "q-en-rust-888581a47955a17237fe4d92405e251b297168b89dff51ba03028fb200bf9668", "query": "The benchmarks for LinearMap and TreeMap should just be using the Map trait, but I encountered a strange borrow checking issue: diff: error: This seems like a borrowck bug, but I'm not sure. The same issue occurs when trying to work around it with a trait like with a method.", "positive_passages": [{"docid": "doc-en-rust-460114894fb10599b720633abe9819bc613d1f42e13bd34517ad5a88d7f5faa7", "text": " // Copyright 2012 The Rust Project Developers. See the COPYRIGHT // Copyright 2013 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. //", "commid": "rust_pr_5476"}], "negative_passages": []}
{"query_id": "q-en-rust-888581a47955a17237fe4d92405e251b297168b89dff51ba03028fb200bf9668", "query": "The benchmarks for LinearMap and TreeMap should just be using the Map trait, but I encountered a strange borrow checking issue: diff: error: This seems like a borrowck bug, but I'm not sure. The same issue occurs when trying to work around it with a trait like with a method.", "positive_passages": [{"docid": "doc-en-rust-d2b9a4fcf51ea22676a3f2c8bf063af47cb3cf4f7a58c04fd8000b8fc9201a5f", "text": "// except according to those terms. extern mod std; use std::oldmap; use std::treemap::TreeMap; use core::hashmap::linear::*; use core::io::WriterUtil; struct Results { sequential_ints: float, random_ints: float, delete_ints: float, sequential_strings: float, random_strings: float, delete_strings: float } fn timed(result: &mut float, op: &fn()) { let start = std::time::precise_time_s(); op(); let end = std::time::precise_time_s(); *result = (end - start); use core::io; use std::time; use std::treemap::TreeMap; use core::hashmap::linear::{LinearMap, LinearSet}; use core::trie::TrieMap; fn timed(label: &str, f: &fn()) { let start = time::precise_time_s(); f(); let end = time::precise_time_s(); io::println(fmt!(\" %s: %f\", label, end - start)); } fn old_int_benchmarks(rng: @rand::Rng, num_keys: uint, results: &mut Results) { { let map = oldmap::HashMap(); do timed(&mut results.sequential_ints) { for uint::range(0, num_keys) |i| { map.insert(i, i+1); } fn ascending for uint::range(0, num_keys) |i| { fail_unless!(map.get(&i) == i+1); } do timed(\"insert\") { for uint::range(0, n_keys) |i| { map.insert(i, i + 1); } } { let map = oldmap::HashMap(); do timed(&mut results.random_ints) { for uint::range(0, num_keys) |i| { map.insert(rng.next() as uint, i); } do timed(\"search\") { for uint::range(0, n_keys) |i| { fail_unless!(map.find(&i).unwrap() == &(i + 1)); } } { let map = oldmap::HashMap(); for uint::range(0, num_keys) |i| { map.insert(i, i);; } do timed(&mut 
results.delete_ints) { for uint::range(0, num_keys) |i| { fail_unless!(map.remove(&i)); } do timed(\"remove\") { for uint::range(0, n_keys) |i| { fail_unless!(map.remove(&i)); } } } fn old_str_benchmarks(rng: @rand::Rng, num_keys: uint, results: &mut Results) { { let map = oldmap::HashMap(); do timed(&mut results.sequential_strings) { for uint::range(0, num_keys) |i| { let s = uint::to_str(i); map.insert(s, i); } fn descending for uint::range(0, num_keys) |i| { let s = uint::to_str(i); fail_unless!(map.get(&s) == i); } do timed(\"insert\") { for uint::range(0, n_keys) |i| { map.insert(i, i + 1); } } { let map = oldmap::HashMap(); do timed(&mut results.random_strings) { for uint::range(0, num_keys) |i| { let s = uint::to_str(rng.next() as uint); map.insert(s, i); } do timed(\"search\") { for uint::range(0, n_keys) |i| { fail_unless!(map.find(&i).unwrap() == &(i + 1)); } } { let map = oldmap::HashMap(); for uint::range(0, num_keys) |i| { map.insert(uint::to_str(i), i); } do timed(&mut results.delete_strings) { for uint::range(0, num_keys) |i| { fail_unless!(map.remove(&uint::to_str(i))); } do timed(\"remove\") { for uint::range(0, n_keys) |i| { fail_unless!(map.remove(&i)); } } } fn linear_int_benchmarks(rng: @rand::Rng, num_keys: uint, results: &mut Results) { { let mut map = LinearMap::new(); do timed(&mut results.sequential_ints) { for uint::range(0, num_keys) |i| { map.insert(i, i+1); } fn vector for uint::range(0, num_keys) |i| { fail_unless!(map.find(&i).unwrap() == &(i+1)); } do timed(\"insert\") { for uint::range(0, n_keys) |i| { map.insert(dist[i], i + 1); } } { let mut map = LinearMap::new(); do timed(&mut results.random_ints) { for uint::range(0, num_keys) |i| { map.insert(rng.next() as uint, i); } do timed(\"search\") { for uint::range(0, n_keys) |i| { fail_unless!(map.find(&dist[i]).unwrap() == &(i + 1)); } } { let mut map = LinearMap::new(); for uint::range(0, num_keys) |i| { map.insert(i, i);; } do timed(&mut results.delete_ints) { for uint::range(0, 
num_keys) |i| { fail_unless!(map.remove(&i)); } do timed(\"remove\") { for uint::range(0, n_keys) |i| { fail_unless!(map.remove(&dist[i])); } } } fn linear_str_benchmarks(rng: @rand::Rng, num_keys: uint, results: &mut Results) { { let mut map = LinearMap::new(); do timed(&mut results.sequential_strings) { for uint::range(0, num_keys) |i| { let s = uint::to_str(i); map.insert(s, i); } for uint::range(0, num_keys) |i| { let s = uint::to_str(i); fail_unless!(map.find(&s).unwrap() == &i); } fn main() { let args = os::args(); let n_keys = { if args.len() == 2 { uint::from_str(args[1]).get() } else { 1000000 } } }; { let mut map = LinearMap::new(); do timed(&mut results.random_strings) { for uint::range(0, num_keys) |i| { let s = uint::to_str(rng.next() as uint); map.insert(s, i); } } } let mut rand = vec::with_capacity(n_keys); { let mut map = LinearMap::new(); for uint::range(0, num_keys) |i| { map.insert(uint::to_str(i), i); } do timed(&mut results.delete_strings) { for uint::range(0, num_keys) |i| { fail_unless!(map.remove(&uint::to_str(i))); let rng = core::rand::seeded_rng([1, 1, 1, 1, 1, 1, 1]); let mut set = LinearSet::new(); while set.len() != n_keys { let next = rng.next() as uint; if set.insert(next) { rand.push(next); } } } } fn tree_int_benchmarks(rng: @rand::Rng, num_keys: uint, results: &mut Results) { { let mut map = TreeMap::new(); do timed(&mut results.sequential_ints) { for uint::range(0, num_keys) |i| { map.insert(i, i+1); } io::println(fmt!(\"%? 
keys\", n_keys)); for uint::range(0, num_keys) |i| { fail_unless!(map.find(&i).unwrap() == &(i+1)); } } } io::println(\"nTreeMap:\"); { let mut map = TreeMap::new(); do timed(&mut results.random_ints) { for uint::range(0, num_keys) |i| { map.insert(rng.next() as uint, i); } } let mut map = TreeMap::new:: let mut map = TreeMap::new(); for uint::range(0, num_keys) |i| { map.insert(i, i);; } do timed(&mut results.delete_ints) { for uint::range(0, num_keys) |i| { fail_unless!(map.remove(&i)); } } let mut map = TreeMap::new:: } fn tree_str_benchmarks(rng: @rand::Rng, num_keys: uint, results: &mut Results) { { let mut map = TreeMap::new(); do timed(&mut results.sequential_strings) { for uint::range(0, num_keys) |i| { let s = uint::to_str(i); map.insert(s, i); } for uint::range(0, num_keys) |i| { let s = uint::to_str(i); fail_unless!(map.find(&s).unwrap() == &i); } } io::println(\" Random integers:\"); let mut map = TreeMap::new:: let mut map = TreeMap::new(); do timed(&mut results.random_strings) { for uint::range(0, num_keys) |i| { let s = uint::to_str(rng.next() as uint); map.insert(s, i); } } let mut map = LinearMap::new:: let mut map = TreeMap::new(); for uint::range(0, num_keys) |i| { map.insert(uint::to_str(i), i); } do timed(&mut results.delete_strings) { for uint::range(0, num_keys) |i| { fail_unless!(map.remove(&uint::to_str(i))); } } let mut map = LinearMap::new:: } fn write_header(header: &str) { io::stdout().write_str(header); io::stdout().write_str(\"n\"); } fn write_row(label: &str, value: float) { io::stdout().write_str(fmt!(\"%30s %f sn\", label, value)); } fn write_results(label: &str, results: &Results) { write_header(label); write_row(\"sequential_ints\", results.sequential_ints); write_row(\"random_ints\", results.random_ints); write_row(\"delete_ints\", results.delete_ints); write_row(\"sequential_strings\", results.sequential_strings); write_row(\"random_strings\", results.random_strings); write_row(\"delete_strings\", results.delete_strings); } fn 
empty_results() -> Results { Results { sequential_ints: 0f, random_ints: 0f, delete_ints: 0f, sequential_strings: 0f, random_strings: 0f, delete_strings: 0f, { io::println(\" Random integers:\"); let mut map = LinearMap::new:: } fn main() { let args = os::args(); let num_keys = { if args.len() == 2 { uint::from_str(args[1]).get() } else { 100 // woefully inadequate for any real measurement } }; let seed = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; io::println(\"nTrieMap:\"); { let rng = rand::seeded_rng(seed); let mut results = empty_results(); old_int_benchmarks(rng, num_keys, &mut results); old_str_benchmarks(rng, num_keys, &mut results); write_results(\"std::oldmap::HashMap\", &results); let mut map = TrieMap::new:: let rng = rand::seeded_rng(seed); let mut results = empty_results(); linear_int_benchmarks(rng, num_keys, &mut results); linear_str_benchmarks(rng, num_keys, &mut results); write_results(\"core::hashmap::linear::LinearMap\", &results); let mut map = TrieMap::new:: let rng = rand::seeded_rng(seed); let mut results = empty_results(); tree_int_benchmarks(rng, num_keys, &mut results); tree_str_benchmarks(rng, num_keys, &mut results); write_results(\"std::treemap::TreeMap\", &results); io::println(\" Random integers:\"); let mut map = TrieMap::new:: /// #![feature(box_leak)] /// /// fn main() { /// let x = Box::new(41); /// let static_ref: &'static mut usize = Box::leak(x);", "commid": "rust_pr_48110"}], "negative_passages": []}
{"query_id": "q-en-rust-048e8bc0a7141491aa38a90cc2a49477bac51c738aeed492218a9d2c99d11a32", "query": "Implemented in .\nDumping this link here for future reference:\nClosing since is stable in nightly and will make its way to stable in 1.26.0.\nI was confused why this was allowed to stabilize, but it appears that was fixed?\nThe mechanism doesn't work anymore at least. Should we escalate this to a new issue with P-high?\nNope, just leaving a historical note for anyone else who was confused. Although if anyone knows when this was fixed (and if a test was ?) that would be good to note.", "positive_passages": [{"docid": "doc-en-rust-8249ce9d6c380064acf32b954380f5137c30349f016c3fdc5b8e433305b88ee7", "text": "/// Unsized data: /// /// ``` /// #![feature(box_leak)] /// /// fn main() { /// let x = vec![1, 2, 3].into_boxed_slice(); /// let static_ref = Box::leak(x);", "commid": "rust_pr_48110"}], "negative_passages": []}
{"query_id": "q-en-rust-048e8bc0a7141491aa38a90cc2a49477bac51c738aeed492218a9d2c99d11a32", "query": "Implemented in .\nDumping this link here for future reference:\nClosing since is stable in nightly and will make its way to stable in 1.26.0.\nI was confused why this was allowed to stabilize, but it appears that was fixed?\nThe mechanism doesn't work anymore at least. Should we escalate this to a new issue with P-high?\nNope, just leaving a historical note for anyone else who was confused. Although if anyone knows when this was fixed (and if a test was ?) that would be good to note.", "positive_passages": [{"docid": "doc-en-rust-2d74c0eafb3963f8f8e1ee8446b8608ee2ec207c4dbb1dab916d500538b2bf32", "text": "/// assert_eq!(*static_ref, [4, 2, 3]); /// } /// ``` #[unstable(feature = \"box_leak\", reason = \"needs an FCP to stabilize\", issue = \"46179\")] #[stable(feature = \"box_leak\", since = \"1.26.0\")] #[inline] pub fn leak<'a>(b: Box let col = (col_lo.0 as u64) & 0xFF; let line = ((line_lo as u64) & 0xFF_FF_FF) << 8; let len = ((span.hi - span.lo).0 as u64) << 32; let line_col_len = col | line | len; Hash::hash(&line_col_len, hasher); // Hash both the length and the end location (line/column) of a span. If we // hash only the length, for example, then two otherwise equal spans with // different end locations will have the same hash. This can cause a problem // during incremental compilation wherein a previous result for a query that // depends on the end location of a span will be incorrectly reused when the // end location of the span it depends on has changed (see issue #74890). A // similar analysis applies if some query depends specifically on the length // of the span, but we only hash the end location. So hash both. 
let col_lo_trunc = (col_lo.0 as u64) & 0xFF; let line_lo_trunc = ((line_lo as u64) & 0xFF_FF_FF) << 8; let col_hi_trunc = (col_hi.0 as u64) & 0xFF << 32; let line_hi_trunc = ((line_hi as u64) & 0xFF_FF_FF) << 40; let col_line = col_lo_trunc | line_lo_trunc | col_hi_trunc | line_hi_trunc; let len = (span.hi - span.lo).0; Hash::hash(&col_line, hasher); Hash::hash(&len, hasher); span.ctxt.hash_stable(ctx, hasher); } }", "commid": "rust_pr_76256"}], "negative_passages": []}
{"query_id": "q-en-rust-eb3a5a04dc565f82620b41f30effb44b9a03962f5bb8b5c03e83f069f107b6f6", "query": "Index out of bounds ICE while compiling . I don't know enough about why this could happen to produce a minimal reproduction. Backtrace:\nhere's a test case. i got this ICE by random (un)luck and i can't remove any further code or it won't reproduce. at least it's a single file. cannot reproduce with rustc, must be called from cargo. Cargo toml has no dependencies tho\nat least i think its related, since its in CacheDecoder as well\ncannot reproduce after cargo clean, so attached is the entire project. linux x86_64 rustc 1.26.2 ( 2018-06-01)", "positive_passages": [{"docid": "doc-en-rust-44765efb3f3d67d33668bb98e690f398fa812059fa50e353eef1c76dc7e155a4", "text": " include ../../run-make-fulldeps/tools.mk # FIXME https://github.com/rust-lang/rust/issues/78911 # ignore-32bit wrong/no cross compiler and sometimes we pass wrong gcc args (-m64) # Tests that we don't ICE during incremental compilation after modifying a # function span such that its previous end line exceeds the number of lines # in the new file, but its start line/column and length remain the same. SRC=$(TMPDIR)/src INCR=$(TMPDIR)/incr all: mkdir $(SRC) mkdir $(INCR) cp a.rs $(SRC)/main.rs $(RUSTC) -C incremental=$(INCR) $(SRC)/main.rs cp b.rs $(SRC)/main.rs $(RUSTC) -C incremental=$(INCR) $(SRC)/main.rs ", "commid": "rust_pr_76256"}], "negative_passages": []}
{"query_id": "q-en-rust-eb3a5a04dc565f82620b41f30effb44b9a03962f5bb8b5c03e83f069f107b6f6", "query": "Index out of bounds ICE while compiling . I don't know enough about why this could happen to produce a minimal reproduction. Backtrace:\nhere's a test case. i got this ICE by random (un)luck and i can't remove any further code or it won't reproduce. at least it's a single file. cannot reproduce with rustc, must be called from cargo. Cargo toml has no dependencies tho\nat least i think its related, since its in CacheDecoder as well\ncannot reproduce after cargo clean, so attached is the entire project. linux x86_64 rustc 1.26.2 ( 2018-06-01)", "positive_passages": [{"docid": "doc-en-rust-2dd9dafe076fbf295bc70c7f860aa6e3ed726228d3f4d85520467c716799d72e", "text": " fn main() { // foo must be used. foo(); } // For this test to operate correctly, foo's body must start on exactly the same // line and column and have the exact same length in bytes in a.rs and b.rs. In // a.rs, the body must end on a line number which does not exist in b.rs. // Basically, avoid modifying this file, including adding or removing whitespace! fn foo() { assert_eq!(1, 1); } ", "commid": "rust_pr_76256"}], "negative_passages": []}
{"query_id": "q-en-rust-eb3a5a04dc565f82620b41f30effb44b9a03962f5bb8b5c03e83f069f107b6f6", "query": "Index out of bounds ICE while compiling . I don't know enough about why this could happen to produce a minimal reproduction. Backtrace:\nhere's a test case. i got this ICE by random (un)luck and i can't remove any further code or it won't reproduce. at least it's a single file. cannot reproduce with rustc, must be called from cargo. Cargo toml has no dependencies tho\nat least i think its related, since its in CacheDecoder as well\ncannot reproduce after cargo clean, so attached is the entire project. linux x86_64 rustc 1.26.2 ( 2018-06-01)", "positive_passages": [{"docid": "doc-en-rust-ca230d5e18bba74e60b1cdfff1714f362c7735739b7fce812a2052404d595c44", "text": " fn main() { // foo must be used. foo(); } // For this test to operate correctly, foo's body must start on exactly the same // line and column and have the exact same length in bytes in a.rs and b.rs. In // a.rs, the body must end on a line number which does not exist in b.rs. // Basically, avoid modifying this file, including adding or removing whitespace! fn foo() { assert_eq!(1, 1);//// } ", "commid": "rust_pr_76256"}], "negative_passages": []}
{"query_id": "q-en-rust-eb3a5a04dc565f82620b41f30effb44b9a03962f5bb8b5c03e83f069f107b6f6", "query": "Index out of bounds ICE while compiling . I don't know enough about why this could happen to produce a minimal reproduction. Backtrace:\nhere's a test case. i got this ICE by random (un)luck and i can't remove any further code or it won't reproduce. at least it's a single file. cannot reproduce with rustc, must be called from cargo. Cargo toml has no dependencies tho\nat least i think its related, since its in CacheDecoder as well\ncannot reproduce after cargo clean, so attached is the entire project. linux x86_64 rustc 1.26.2 ( 2018-06-01)", "positive_passages": [{"docid": "doc-en-rust-fa86e930fb387826ec73795c6b563d7f243349db35a973d5ee234aa8d9960cda", "text": "include ../../run-make-fulldeps/tools.mk # FIXME https://github.com/rust-lang/rust/issues/78911 # ignore-32bit wrong/no cross compiler and sometimes we pass wrong gcc args (-m64) all: foo", "commid": "rust_pr_76256"}], "negative_passages": []}
{"query_id": "q-en-rust-66bd0557e3adcbd9bc2e0338b5bc767c87f0007e5e61dd72343927c908c77916", "query": "this code currently emmits Could we detect explicitly if there are 2 unchained (via && or ) \"variables\" after an \"if\" and if the first one is \"not\" and explicitly suggest using \"!\" instead of \"not\" instead? meta: `", "positive_passages": [{"docid": "doc-en-rust-d26cc4e78fb8a3c07b6c09839d3455856b5189a06f5206dd567e636412a55aa1", "text": "let (span, e) = self.interpolated_or_expr_span(e)?; (lo.to(span), ExprKind::Box(e)) } _ => return self.parse_dot_or_call_expr(Some(attrs)) token::Ident(..) if self.token.is_ident_named(\"not\") => { // `not` is just an ordinary identifier in Rust-the-language, // but as `rustc`-the-compiler, we can issue clever diagnostics // for confused users who really want to say `!` let token_cannot_continue_expr = |t: &token::Token| match *t { // These tokens can start an expression after `!`, but // can't continue an expression after an ident token::Ident(ident, is_raw) => token::ident_can_begin_expr(ident, is_raw), token::Literal(..) | token::Pound => true, token::Interpolated(ref nt) => match nt.0 { token::NtIdent(..) | token::NtExpr(..) | token::NtBlock(..) | token::NtPath(..) => true, _ => false, }, _ => false }; let cannot_continue_expr = self.look_ahead(1, token_cannot_continue_expr); if cannot_continue_expr { self.bump(); // Emit the error ... let mut err = self.diagnostic() .struct_span_err(self.span, &format!(\"unexpected {} after identifier\", self.this_token_descr())); // span the `not` plus trailing whitespace to avoid // trailing whitespace after the `!` in our suggestion let to_replace = self.sess.codemap() .span_until_non_whitespace(lo.to(self.span)); err.span_suggestion_short(to_replace, \"use `!` to perform logical negation\", \"!\".to_owned()); err.emit(); // \u2014and recover! 
(just as if we were in the block // for the `token::Not` arm) let e = self.parse_prefix_expr(None); let (span, e) = self.interpolated_or_expr_span(e)?; (lo.to(span), self.mk_unary(UnOp::Not, e)) } else { return self.parse_dot_or_call_expr(Some(attrs)); } } _ => { return self.parse_dot_or_call_expr(Some(attrs)); } }; return Ok(self.mk_expr(lo.to(hi), ex, attrs)); }", "commid": "rust_pr_49258"}], "negative_passages": []}
{"query_id": "q-en-rust-66bd0557e3adcbd9bc2e0338b5bc767c87f0007e5e61dd72343927c908c77916", "query": "this code currently emmits Could we detect explicitly if there are 2 unchained (via && or ) \"variables\" after an \"if\" and if the first one is \"not\" and explicitly suggest using \"!\" instead of \"not\" instead? meta: `", "positive_passages": [{"docid": "doc-en-rust-9624e351cf4bad0520ea726b7c62ecbe9bd282479f3cd7913a1c02a042f1910f", "text": "// Which is valid in other languages, but not Rust. match self.parse_stmt_without_recovery(false) { Ok(Some(stmt)) => { if self.look_ahead(1, |t| t == &token::OpenDelim(token::Brace)) { // if the next token is an open brace (e.g., `if a b {`), the place- // inside-a-block suggestion would be more likely wrong than right return Err(e); } let mut stmt_span = stmt.span; // expand the span to include the semicolon, if it exists if self.eat(&token::Semi) {", "commid": "rust_pr_49258"}], "negative_passages": []}
{"query_id": "q-en-rust-66bd0557e3adcbd9bc2e0338b5bc767c87f0007e5e61dd72343927c908c77916", "query": "this code currently emmits Could we detect explicitly if there are 2 unchained (via && or ) \"variables\" after an \"if\" and if the first one is \"not\" and explicitly suggest using \"!\" instead of \"not\" instead? meta: `", "positive_passages": [{"docid": "doc-en-rust-7afa98cf0c3fc68fbc297f2ddd7842c43f00a33bcc0b8688c2ab41f17953f0fe", "text": "} } fn ident_can_begin_expr(ident: ast::Ident, is_raw: bool) -> bool { pub(crate) fn ident_can_begin_expr(ident: ast::Ident, is_raw: bool) -> bool { let ident_token: Token = Ident(ident, is_raw); !ident_token.is_reserved_ident() ||", "commid": "rust_pr_49258"}], "negative_passages": []}
{"query_id": "q-en-rust-66bd0557e3adcbd9bc2e0338b5bc767c87f0007e5e61dd72343927c908c77916", "query": "this code currently emmits Could we detect explicitly if there are 2 unchained (via && or ) \"variables\" after an \"if\" and if the first one is \"not\" and explicitly suggest using \"!\" instead of \"not\" instead? meta: `", "positive_passages": [{"docid": "doc-en-rust-fa4c0741eadf067ab39d78425020e16d8ef5bb7d52ad34532a14b5eb5ebf9368", "text": "self.lifetime().is_some() } /// Returns `true` if the token is a identifier whose name is the given /// string slice. pub fn is_ident_named(&self, name: &str) -> bool { match self.ident() { Some((ident, _)) => ident.name.as_str() == name, None => false } } /// Returns `true` if the token is a documentation comment. pub fn is_doc_comment(&self) -> bool { match *self {", "commid": "rust_pr_49258"}], "negative_passages": []}
{"query_id": "q-en-rust-66bd0557e3adcbd9bc2e0338b5bc767c87f0007e5e61dd72343927c908c77916", "query": "this code currently emmits Could we detect explicitly if there are 2 unchained (via && or ) \"variables\" after an \"if\" and if the first one is \"not\" and explicitly suggest using \"!\" instead of \"not\" instead? meta: `", "positive_passages": [{"docid": "doc-en-rust-c1cf2c57629d64b5a14d8d63f5653bd0435a0a3d7741462ff686862e428fd799", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 # ignore-macos (rust-lang/rust#66568) # Objects are reproducible but their path is not. all: ", "commid": "rust_pr_71931"}], "negative_passages": []}
{"query_id": "q-en-rust-b652b2986b58acc1b404a614070114b68db4134dd82737c46ecfa844323d3f96", "query": "This is an issue extracted from which is caused by an issue that any OSX compilation with debuginfo ends up being nondeterministic. Specifically (currently known at least) the source of nondeterminism is that an mtime for an object file winds up in the final binary. It turns out this isn't really our fault (unfortunately that makes it harder to fix!). This can be reproduced with just C and a linker: Here we're using the exact same object file (with two timestamps) and we're seeing different linked artifacts. This is a source of bugs in programs that expect rustc to be deterministic (aka as was originally stated) and is something that we as rustc should probably fix. Unfortunately I don't really know of a fix for this myself. I'd be tempted to take a big hammer to the problem and deterministically set all mtime fields for objects going into the linker to a known fixed value, but that unfortunately doesn't fix the determinism for C code (whose objects we don't control) and also is probably too big of a hammer (if a build system uses the mtime of the object to control rebuilds it'd get mixed up). We could also use something like and reach in to the specific field and remove the actual data. I found it in a symbol section with the type (described in various documents online too apparently). We may be able to postprocess all output artifacts on OSX to maybe just zero out these fields unconditionally (or set them to something like May 15, 2015), although I'm not actually sure if this would be easy to do.\ncc cc (you're probably interested in this for the sccache ramifications like is)\nIf we used LLD for linking (), it would be possible to fix this in the linker (by providing a flag to ignore mtime). This would also fix (this part of) deterministic compilation for other languages as well.\nSource code for the darwin linker , but I have no idea whether they take patches. 
Maybe LLVM develpers know more. Most likely, Apple will switch to LLD eventually.\nI wonder what experts on deterministic builds ( ) can say about this.\nHm. I wonder why we haven't noticed this for Firefox builds? Maybe the version of the linker we're using has a patch to work around this? We're using for our builds.\nthat is indeed surprising! The source code there , but that may be getting postprocessed somewhere else perhaps.\nWe discussed this on IRC, and I suspect the reason is that nobody has actually tried to do unstripped reproducible Firefox builds for macOS (although I'm not 100% sure). The info in question are STABS entries used by dsymutil to link the debug info from the object files into the dSYM. This isn't critical for sccache currently, since it doesn't cache linker outputs.\nRelated: noticed that static archives are not reproducible on macOS because Apple's tool puts timestamps in the archive ().\nRight, I think the impact on sccache is similar to What I am seeing is that the .dylib non-determinism is causing unexpected cache misses in sccache since the dylibs end up being passed to rustc. Here is an example for :\nAh, right, proc macro crates!\nThis should be fixable by setting the in the env of the linker (libtool and ld64). For more context, see determinism blog post and commit for Chromium. I'm not sure of the first Xcode version that supports this though.\nIt should be harmless to set it even if it's not supported, you just won't get a deterministic binary. 
in the most recent ld64 source available, FWIW.", "positive_passages": [{"docid": "doc-en-rust-82e04b7a5a03d5180740161a60dacaa1cb036288f719ec2c57f0e9f5952973b0", "text": "rm -rf $(TMPDIR) && mkdir $(TMPDIR) $(RUSTC) reproducible-build-aux.rs $(RUSTC) reproducible-build.rs --crate-type rlib --sysroot $(shell $(RUSTC) --print sysroot) --remap-path-prefix=$(shell $(RUSTC) --print sysroot)=/sysroot cp -r $(shell $(RUSTC) --print sysroot) $(TMPDIR)/sysroot cp -R $(shell $(RUSTC) --print sysroot) $(TMPDIR)/sysroot cp $(TMPDIR)/libreproducible_build.rlib $(TMPDIR)/libfoo.rlib $(RUSTC) reproducible-build.rs --crate-type rlib --sysroot $(TMPDIR)/sysroot --remap-path-prefix=$(TMPDIR)/sysroot=/sysroot cmp \"$(TMPDIR)/libreproducible_build.rlib\" \"$(TMPDIR)/libfoo.rlib\" || exit 1", "commid": "rust_pr_71931"}], "negative_passages": []}
{"query_id": "q-en-rust-7588c1e3095f0508576ed42a62374fae3979416b2680ee03b763f2cb86ed355a", "query": "If a procedural macro generates an access to a named struct field like then the may be either defsite or callsite and it works either way. But if accessing an unnamed tuple struct field like then it only works if is call_site. I believe it should work either way in either case.\nSorry for the delay! Fixed in .", "positive_passages": [{"docid": "doc-en-rust-e1bac876801e44f8f11ade170aa42df1269990b4eed86124b80bfdc0a70eeee1", "text": "// A tuple index may not have a suffix self.expect_no_suffix(sp, \"tuple index\", suf); let dot_span = self.prev_span; hi = self.span; let idx_span = self.span; self.bump(); let invalid_msg = \"invalid tuple or struct index\";", "commid": "rust_pr_48083"}], "negative_passages": []}
{"query_id": "q-en-rust-7588c1e3095f0508576ed42a62374fae3979416b2680ee03b763f2cb86ed355a", "query": "If a procedural macro generates an access to a named struct field like then the may be either defsite or callsite and it works either way. But if accessing an unnamed tuple struct field like then it only works if is call_site. I believe it should work either way in either case.\nSorry for the delay! Fixed in .", "positive_passages": [{"docid": "doc-en-rust-081ea6a2bd57e3b98e77a3c39181b981b1e25f22fa48473718c6fd3de07d75bc", "text": "n.to_string()); err.emit(); } let id = respan(dot_span.to(hi), n); let field = self.mk_tup_field(e, id); e = self.mk_expr(lo.to(hi), field, ThinVec::new()); let field = self.mk_tup_field(e, respan(idx_span, n)); e = self.mk_expr(lo.to(idx_span), field, ThinVec::new()); } None => { let prev_span = self.prev_span;", "commid": "rust_pr_48083"}], "negative_passages": []}
{"query_id": "q-en-rust-7588c1e3095f0508576ed42a62374fae3979416b2680ee03b763f2cb86ed355a", "query": "If a procedural macro generates an access to a named struct field like then the may be either defsite or callsite and it works either way. But if accessing an unnamed tuple struct field like then it only works if is call_site. I believe it should work either way in either case.\nSorry for the delay! Fixed in .", "positive_passages": [{"docid": "doc-en-rust-5220983bc183be2aa1cf1b5245052153651e550fc14c338fb92f401c6054f982", "text": " // Copyright 2017 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 let tcx = self.cx.infcx.tcx; let mut types = vec![(dropped_ty, 0)]; let mut known = FxHashSet(); while let Some((ty, depth)) = types.pop() { let span = DUMMY_SP; // FIXME let result = match tcx.dtorck_constraint_for_ty(span, dropped_ty, depth, ty) { Ok(result) => result, Err(ErrorReported) => { continue; } }; let ty::DtorckConstraint { outlives, dtorck_types, } = result; // All things in the `outlives` array may be touched by // the destructor and must be live at this point. for outlive in outlives { let cause = Cause::DropVar(dropped_local, location); self.push_type_live_constraint(outlive, location, cause); } // If we end visiting the same type twice (usually due to a cycle involving // associated types), we need to ensure that its region types match up with the type // we added to the 'known' map the first time around. For this reason, we need // our infcx to hold onto its calculated region constraints after each call // to dtorck_constraint_for_ty. Otherwise, normalizing the corresponding associated // type will end up instantiating the type with a new set of inference variables // Since this new type will never be in 'known', we end up looping forever. 
// // For this reason, we avoid calling TypeChecker.normalize, instead doing all normalization // ourselves in one large 'fully_perform_op' callback. let (type_constraints, kind_constraints) = self.cx.fully_perform_op(location.at_self(), |cx| { let tcx = cx.infcx.tcx; let mut selcx = traits::SelectionContext::new(cx.infcx); let cause = cx.misc(cx.last_span); let mut types = vec![(dropped_ty, 0)]; let mut final_obligations = Vec::new(); let mut type_constraints = Vec::new(); let mut kind_constraints = Vec::new(); // However, there may also be some types that // `dtorck_constraint_for_ty` could not resolve (e.g., // associated types and parameters). We need to normalize // associated types here and possibly recursively process. for ty in dtorck_types { let ty = self.cx.normalize(&ty, location); let ty = self.cx.infcx.resolve_type_and_region_vars_if_possible(&ty); match ty.sty { ty::TyParam(..) | ty::TyProjection(..) | ty::TyAnon(..) => { let cause = Cause::DropVar(dropped_local, location); self.push_type_live_constraint(ty, location, cause); let mut known = FxHashSet(); while let Some((ty, depth)) = types.pop() { let span = DUMMY_SP; // FIXME let result = match tcx.dtorck_constraint_for_ty(span, dropped_ty, depth, ty) { Ok(result) => result, Err(ErrorReported) => { continue; } }; let ty::DtorckConstraint { outlives, dtorck_types, } = result; // All things in the `outlives` array may be touched by // the destructor and must be live at this point. for outlive in outlives { let cause = Cause::DropVar(dropped_local, location); kind_constraints.push((outlive, location, cause)); } _ => if known.insert(ty) { types.push((ty, depth + 1)); }, // However, there may also be some types that // `dtorck_constraint_for_ty` could not resolve (e.g., // associated types and parameters). We need to normalize // associated types here and possibly recursively process. 
for ty in dtorck_types { let traits::Normalized { value: ty, obligations } = traits::normalize(&mut selcx, cx.param_env, cause.clone(), &ty); final_obligations.extend(obligations); let ty = cx.infcx.resolve_type_and_region_vars_if_possible(&ty); match ty.sty { ty::TyParam(..) | ty::TyProjection(..) | ty::TyAnon(..) => { let cause = Cause::DropVar(dropped_local, location); type_constraints.push((ty, location, cause)); } _ => if known.insert(ty) { types.push((ty, depth + 1)); }, } } } Ok(InferOk { value: (type_constraints, kind_constraints), obligations: final_obligations }) }).unwrap(); for (ty, location, cause) in type_constraints { self.push_type_live_constraint(ty, location, cause); } for (kind, location, cause) in kind_constraints { self.push_type_live_constraint(kind, location, cause); } } }", "commid": "rust_pr_47920"}], "negative_passages": []}
{"query_id": "q-en-rust-f6f45c329194f28dd2333fc1b6893cd181117e4a9188c96da9a46fd20eed02cb", "query": "I'd like to work on this.\nThis appears to be a regular error, not an ICE.", "positive_passages": [{"docid": "doc-en-rust-9334535e2f7df9a81d012b03120511124fb3646a69d18f6806248531d859a1b6", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 unsafe { rust_inc_kernel_live_count(); } unsafe { rust_dec_kernel_live_count(); } do fn&() { let shutdown_port = swap_unwrap(&mut *shutdown_port); f(shutdown_port) }.finally || { unsafe { rust_dec_kernel_live_count(); } unsafe { rust_inc_kernel_live_count(); } // Service my have already exited service.send(UnregisterWeakTask(task)); }", "commid": "rust_pr_4858"}], "negative_passages": []}
{"query_id": "q-en-rust-535540629c679811d8d504acc5a06b7392f2345682a2b4727bf356513c9fb850", "query": "/ and / do the same thing.\nSo, should registertask/unregistertask then be exposed in rust_builtin so it can be called from Rust?\nYes, though I suggest renaming it, since it will no longer just be used for 'registering', nor does it strictly have anything to do with tasks or weak tasks from the kernel's perspective - this count is only for determining when to shutdown the runtime. Maybe for the method name and for the builtin. It's not great either, but is less wed to implementation details.", "positive_passages": [{"docid": "doc-en-rust-9fea85231ddd20a247ef2af9b8985073e5d7c24d60a875b948c4cb4a7ad6ac54", "text": "let port = swap_unwrap(&mut *port); // The weak task service is itself a weak task debug!(\"weakening the weak service task\"); unsafe { rust_inc_kernel_live_count(); } unsafe { rust_dec_kernel_live_count(); } run_weak_task_service(port); }.finally { debug!(\"unweakening the weak service task\"); unsafe { rust_dec_kernel_live_count(); } unsafe { rust_inc_kernel_live_count(); } } }", "commid": "rust_pr_4858"}], "negative_passages": []}
{"query_id": "q-en-rust-c9272f22dbff022b76cc904a2e597f2aa2588cb65afe57d99a14b694adccd85a", "query": "Do you have a testcase that reproduces this problem? Also, what version of rustc are you using?\nI'm sorry for the late reply to your comment. $ rustc this is my bad code. : : rustc 0.6 ( 2013-01-20 20:35:24 -0800)\nFrom my cursory glance, multibyte characters (Hangul in this case) seem to cause this bug.\nNot critical for 0.6; removing milestone\nSeems to be fixed. Checking in a test case.\nDone pending\nTest case in , closing", "positive_passages": [{"docid": "doc-en-rust-7c79db9fd3ee1476cadff9b8510e982305de351b4d19e872415fec81b21b110d", "text": " // Copyright 2012 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 /// Fills the internal buffer of this object, returning the buffer contents. /// Returns the contents of the internal buffer, filling it with more data /// from the inner reader if it is empty. /// /// This function is a lower-level call. It needs to be paired with the /// [`consume`] method to function properly. When calling this", "commid": "rust_pr_54046"}], "negative_passages": []}
{"query_id": "q-en-rust-cccb20dd5e45ba2c8a0dcee70480ba8cadef6bbaee0f670525e5aa1d3789ef5d", "query": "This is a tracking issue for the RFC \"Type privacy and private-in-public lints \" (rust-lang/rfcs). Steps: [ ] Implement the RFC (cc ) [ ] Adjust documentation ([see instructions on forge][doc-guide]) [ ] Stabilization PR ([see instructions on forge][stabilization-guide]) [stabilization-guide]: [doc-guide]: Unresolved questions: [ ] It's not fully clear if the restriction for associated type definitions required for type privacy soundness, or it's just a workaround for a technical difficulty. [ ] Interactions between macros 2.0 and the notions of reachability / effective visibility used for the lints are unclear. $DIR/feature-gate-type_privacy_lints.rs:3:1 | LL | #![warn(unnameable_types)] | ^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: the `unnameable_types` lint is unstable = note: see issue #48054 --> $DIR/unnameable_types.rs:2:9 --> $DIR/unnameable_types.rs:1:9 | LL | #![deny(unnameable_types)] | ^^^^^^^^^^^^^^^^ error: enum `PubE` is reachable but cannot be named --> $DIR/unnameable_types.rs:7:5 --> $DIR/unnameable_types.rs:6:5 | LL | pub enum PubE { | ^^^^^^^^^^^^^ reachable at visibility `pub`, but can only be named at visibility `pub(crate)` error: trait `PubTr` is reachable but cannot be named --> $DIR/unnameable_types.rs:11:5 --> $DIR/unnameable_types.rs:10:5 | LL | pub trait PubTr { | ^^^^^^^^^^^^^^^ reachable at visibility `pub`, but can only be named at visibility `pub(crate)`", "commid": "rust_pr_120144"}], "negative_passages": []}
{"query_id": "q-en-rust-2021d9dc7731cc45fb4f5f2c4a0ed5ada1f4422c3a3e4de3f3d2bafb3259d1d0", "query": "Document of and for unsigned integers take 2^4 ( is not xor but power) as examples. Because 2^4 = 4^2, these examples are not very helpful for those unfamiliar with math words in English and thus rely on example codes. We should take another example, e.g. 2^5 (which is not equal to 5^2).", "positive_passages": [{"docid": "doc-en-rust-a7ce838a7d130dc685d99b9059a299d91f8a580262377e39f18016badcf00305", "text": "``` \", $Feature, \"let x: \", stringify!($SelfT), \" = 2; // or any other integer type assert_eq!(x.pow(4), 16);\", assert_eq!(x.pow(5), 32);\", $EndFeature, \" ```\"), #[stable(feature = \"rust1\", since = \"1.0.0\")]", "commid": "rust_pr_48397"}], "negative_passages": []}
{"query_id": "q-en-rust-2021d9dc7731cc45fb4f5f2c4a0ed5ada1f4422c3a3e4de3f3d2bafb3259d1d0", "query": "Document of and for unsigned integers take 2^4 ( is not xor but power) as examples. Because 2^4 = 4^2, these examples are not very helpful for those unfamiliar with math words in English and thus rely on example codes. We should take another example, e.g. 2^5 (which is not equal to 5^2).", "positive_passages": [{"docid": "doc-en-rust-5cacde338521445d339ec3b94f430427e114ccce8e6a651c5b0d6a5cc3c6a68c", "text": "Basic usage: ``` \", $Feature, \"assert_eq!(2\", stringify!($SelfT), \".pow(4), 16);\", $EndFeature, \" \", $Feature, \"assert_eq!(2\", stringify!($SelfT), \".pow(5), 32);\", $EndFeature, \" ```\"), #[stable(feature = \"rust1\", since = \"1.0.0\")] #[inline]", "commid": "rust_pr_48397"}], "negative_passages": []}
{"query_id": "q-en-rust-2f71a7a4df60a8ef522a8c924484ad7d4061c88f4933572142d5c21fcf38cc97", "query": "One of the easiest ways to make CI faster is to make things parallel and simply use the hardware we have available to us. Unfortunately though we don't have a lot of data about how parallel our build is. Are there steps we think are parallel but actually aren't? Are we pegged to one core for long durations when there's other work we could be doing? The general idea here is that we'd spin up a daemon at the very start of the build which would sample CPU utilization every so often. This daemon would then update a file that's either displayed or uploaded at the end of the build. Hopefully we could then use these logs to get a better view into how the builders are working during the build, diagnose non-parallel portions of the build, and implement fixes to use all the cpus we've got. cc\nOn Windows this can be done by taking advantage of job objects. If the entire build is wrapped in a job object then we can call with to get a bunch of useful data.\nI made script that will print output into the travis log every 30 seconds , . Some findings: Cloning submodules jemalloc, libcompilerbuildtins and liblibc alone takes 30 seconds. While building bootstrap, compiling serdederive, serdejson and bootstrap crates seems to take 30 seconds (total build time: 47 seconds). stage0: Compiling tidy crate seems to take around 30 seconds. Compiling rustcerrors takes at least 2 minutes, only one codegen-unit is used Compiling syntaxext takes 9 minutes, only one CGU used stage0 codegen artifacts: Compiling rustcllvm takes 1,5 minutes, one CGU During stage1, rustcerrors and syntaxext builds are approximately as slow as during stage0, rustc_plugins 2 minutes, one CGU. stage2: rustdoc took 2 minutes to build, one CGU compiletest suite=run-make mode=run-make: It looks like there is a single test that takes around 3 minutes to complete and has no parallelization. 
Testing alloc stage1: building liballoc takes around a minute Testing syntax stage1: building syntax takes 1.5 minutes, one CGU Notes: When the load average dropped towards 1, I assumed only one codegen unit was active. The script was only applied to the default pullrequest travis-ci configuration.\nAs shown in , the CPUs assigned to each job may have some performance difference: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz The clock-rate 2.4 GHz vs 2.5 GHz shouldn't make any noticeable difference though (this would at most slow down by 7.2 minutes out of 3 hours if everything is CPU-bound). It is not enough to explain the timeout in .\nI was working on recently for this where it periodically prints out the CPU usage as a percentage for the whole system (aka 1 core on a 4 core machine is 25%). I only got Linux/OSX working though and was unable to figure out a good way to do it on Windows. My thinking for how we'd do this is probably download a binary near the beginning of the build (or set up some script). We'd then run just before we run . That way we could correlate the two timestamps of each log (the main log and the to similar moments in time. Initially I was also thinking we'd just at the end of the build and scrape it later if need be.", "positive_passages": [{"docid": "doc-en-rust-f6e6ae8e17afafac5d9c10044c92753fce1f0ff74a599a2e7442ad4a9ed1b67e", "text": "- checkout: self fetchDepth: 2 # Spawn a background process to collect CPU usage statistics which we'll upload # at the end of the build. See the comments in the script here for more # information. - bash: python src/ci/cpu-usage-over-time.py &> cpu-usage.csv & displayName: \"Collect CPU-usage statistics in the background\" - bash: printenv | sort displayName: Show environment variables", "commid": "rust_pr_61632"}], "negative_passages": []}
{"query_id": "q-en-rust-2f71a7a4df60a8ef522a8c924484ad7d4061c88f4933572142d5c21fcf38cc97", "query": "One of the easiest ways to make CI faster is to make things parallel and simply use the hardware we have available to us. Unfortunately though we don't have a lot of data about how parallel our build is. Are there steps we think are parallel but actually aren't? Are we pegged to one core for long durations when there's other work we could be doing? The general idea here is that we'd spin up a daemon at the very start of the build which would sample CPU utilization every so often. This daemon would then update a file that's either displayed or uploaded at the end of the build. Hopefully we could then use these logs to get a better view into how the builders are working during the build, diagnose non-parallel portions of the build, and implement fixes to use all the cpus we've got. cc\nOn Windows this can be done by taking advantage of job objects. If the entire build is wrapped in a job object then we can call with to get a bunch of useful data.\nI made script that will print output into the travis log every 30 seconds , . Some findings: Cloning submodules jemalloc, libcompilerbuildtins and liblibc alone takes 30 seconds. While building bootstrap, compiling serdederive, serdejson and bootstrap crates seems to take 30 seconds (total build time: 47 seconds). stage0: Compiling tidy crate seems to take around 30 seconds. Compiling rustcerrors takes at least 2 minutes, only one codegen-unit is used Compiling syntaxext takes 9 minutes, only one CGU used stage0 codegen artifacts: Compiling rustcllvm takes 1,5 minutes, one CGU During stage1, rustcerrors and syntaxext builds are approximately as slow as during stage0, rustc_plugins 2 minutes, one CGU. stage2: rustdoc took 2 minutes to build, one CGU compiletest suite=run-make mode=run-make: It looks like there is a single test that takes around 3 minutes to complete and has no parallelization. 
Testing alloc stage1: building liballoc takes around a minute Testing syntax stage1: building syntax takes 1.5 minutes, one CGU Notes: When the load average dropped towards 1, I assumed only one codegen unit was active. The script was only applied to the default pullrequest travis-ci configuration.\nAs shown in , the CPUs assigned to each job may have some performance difference: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz The clock-rate 2.4 GHz vs 2.5 GHz shouldn't make any noticeable difference though (this would at most slow down by 7.2 minutes out of 3 hours if everything is CPU-bound). It is not enough to explain the timeout in .\nI was working on recently for this where it periodically prints out the CPU usage as a percentage for the whole system (aka 1 core on a 4 core machine is 25%). I only got Linux/OSX working though and was unable to figure out a good way to do it on Windows. My thinking for how we'd do this is probably download a binary near the beginning of the build (or set up some script). We'd then run just before we run . That way we could correlate the two timestamps of each log (the main log and the to similar moments in time. Initially I was also thinking we'd just at the end of the build and scrape it later if need be.", "positive_passages": [{"docid": "doc-en-rust-cf5e7ac7ec96d14e6e0c19a2169041e34ae8e7847a421f893faffa08161eb55c", "text": "AWS_SECRET_ACCESS_KEY: $(AWS_SECRET_ACCESS_KEY) condition: and(succeeded(), or(eq(variables.DEPLOY, '1'), eq(variables.DEPLOY_ALT, '1'))) displayName: Upload artifacts # Upload CPU usage statistics that we've been gathering this whole time. Always # execute this step in case we want to inspect failed builds, but don't let # errors here ever fail the build since this is just informational. 
- bash: aws s3 cp --acl public-read cpu-usage.csv s3://$DEPLOY_BUCKET/rustc-builds/$BUILD_SOURCEVERSION/cpu-$SYSTEM_JOBNAME.csv env: AWS_SECRET_ACCESS_KEY: $(AWS_SECRET_ACCESS_KEY) condition: contains(variables, 'AWS_SECRET_ACCESS_KEY') continueOnError: true displayName: Upload CPU usage statistics ", "commid": "rust_pr_61632"}], "negative_passages": []}
{"query_id": "q-en-rust-2f71a7a4df60a8ef522a8c924484ad7d4061c88f4933572142d5c21fcf38cc97", "query": "One of the easiest ways to make CI faster is to make things parallel and simply use the hardware we have available to us. Unfortunately though we don't have a lot of data about how parallel our build is. Are there steps we think are parallel but actually aren't? Are we pegged to one core for long durations when there's other work we could be doing? The general idea here is that we'd spin up a daemon at the very start of the build which would sample CPU utilization every so often. This daemon would then update a file that's either displayed or uploaded at the end of the build. Hopefully we could then use these logs to get a better view into how the builders are working during the build, diagnose non-parallel portions of the build, and implement fixes to use all the cpus we've got. cc\nOn Windows this can be done by taking advantage of job objects. If the entire build is wrapped in a job object then we can call with to get a bunch of useful data.\nI made script that will print output into the travis log every 30 seconds , . Some findings: Cloning submodules jemalloc, libcompilerbuildtins and liblibc alone takes 30 seconds. While building bootstrap, compiling serdederive, serdejson and bootstrap crates seems to take 30 seconds (total build time: 47 seconds). stage0: Compiling tidy crate seems to take around 30 seconds. Compiling rustcerrors takes at least 2 minutes, only one codegen-unit is used Compiling syntaxext takes 9 minutes, only one CGU used stage0 codegen artifacts: Compiling rustcllvm takes 1,5 minutes, one CGU During stage1, rustcerrors and syntaxext builds are approximately as slow as during stage0, rustc_plugins 2 minutes, one CGU. stage2: rustdoc took 2 minutes to build, one CGU compiletest suite=run-make mode=run-make: It looks like there is a single test that takes around 3 minutes to complete and has no parallelization. 
Testing alloc stage1: building liballoc takes around a minute Testing syntax stage1: building syntax takes 1.5 minutes, one CGU Notes: When the load average dropped towards 1, I assumed only one codegen unit was active. The script was only applied to the default pullrequest travis-ci configuration.\nAs shown in , the CPUs assigned to each job may have some performance difference: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz The clock-rate 2.4 GHz vs 2.5 GHz shouldn't make any noticeable difference though (this would at most slow down by 7.2 minutes out of 3 hours if everything is CPU-bound). It is not enough to explain the timeout in .\nI was working on recently for this where it periodically prints out the CPU usage as a percentage for the whole system (aka 1 core on a 4 core machine is 25%). I only got Linux/OSX working though and was unable to figure out a good way to do it on Windows. My thinking for how we'd do this is probably download a binary near the beginning of the build (or set up some script). We'd then run just before we run . That way we could correlate the two timestamps of each log (the main log and the to similar moments in time. Initially I was also thinking we'd just at the end of the build and scrape it later if need be.", "positive_passages": [{"docid": "doc-en-rust-8ab9e0a27c1da409968a249131796085dc211fe1dd96aaa82ab6267f3719955b", "text": " #!/usr/bin/env python # ignore-tidy-linelength # This is a small script that we use on CI to collect CPU usage statistics of # our builders. By seeing graphs of CPU usage over time we hope to correlate # that with possible improvements to Rust's own build system, ideally diagnosing # that either builders are always fully using their CPU resources or they're # idle for long stretches of time. # # This script is relatively simple, but it's platform specific. 
Each platform # (OSX/Windows/Linux) has a different way of calculating the current state of # CPU at a point in time. We then compare two captured states to determine the # percentage of time spent in one state versus another. The state capturing is # all platform-specific but the loop at the bottom is the cross platform part # that executes everywhere. # # # Viewing statistics # # All builders will upload their CPU statistics as CSV files to our S3 buckets. # These URLS look like: # # https://$bucket.s3.amazonaws.com/rustc-builds/$commit/cpu-$builder.csv # # for example # # https://rust-lang-ci2.s3.amazonaws.com/rustc-builds/68baada19cd5340f05f0db15a3e16d6671609bcc/cpu-x86_64-apple.csv # # Each CSV file has two columns. The first is the timestamp of the measurement # and the second column is the % of idle cpu time in that time slice. Ideally # the second column is always zero. # # Once you've downloaded a file there's various ways to plot it and visualize # it. For command line usage you can use a script like so: # # set timefmt '%Y-%m-%dT%H:%M:%S' # set xdata time # set ylabel \"Idle CPU %\" # set xlabel \"Time\" # set datafile sep ',' # set term png # set output \"printme.png\" # set grid # builder = \"i686-apple\" # plot \"cpu-\".builder.\".csv\" using 1:2 with lines title builder # # Executed as `gnuplot < ./foo.plot` it will generate a graph called # `printme.png` which you can then open up. If you know how to improve this # script or the viewing process that would be much appreciated :) (or even if # you know how to automate it!) 
import datetime import sys import time if sys.platform == 'linux2': class State: def __init__(self): with open('/proc/stat', 'r') as file: data = file.readline().split() if data[0] != 'cpu': raise Exception('did not start with \"cpu\"') self.user = int(data[1]) self.nice = int(data[2]) self.system = int(data[3]) self.idle = int(data[4]) self.iowait = int(data[5]) self.irq = int(data[6]) self.softirq = int(data[7]) self.steal = int(data[8]) self.guest = int(data[9]) self.guest_nice = int(data[10]) def idle_since(self, prev): user = self.user - prev.user nice = self.nice - prev.nice system = self.system - prev.system idle = self.idle - prev.idle iowait = self.iowait - prev.iowait irq = self.irq - prev.irq softirq = self.softirq - prev.softirq steal = self.steal - prev.steal guest = self.guest - prev.guest guest_nice = self.guest_nice - prev.guest_nice total = user + nice + system + idle + iowait + irq + softirq + steal + guest + guest_nice return float(idle) / float(total) * 100 elif sys.platform == 'win32': from ctypes.wintypes import DWORD from ctypes import Structure, windll, WinError, GetLastError, byref class FILETIME(Structure): _fields_ = [ (\"dwLowDateTime\", DWORD), (\"dwHighDateTime\", DWORD), ] class State: def __init__(self): idle, kernel, user = FILETIME(), FILETIME(), FILETIME() success = windll.kernel32.GetSystemTimes( byref(idle), byref(kernel), byref(user), ) assert success, WinError(GetLastError())[1] self.idle = (idle.dwHighDateTime << 32) | idle.dwLowDateTime self.kernel = (kernel.dwHighDateTime << 32) | kernel.dwLowDateTime self.user = (user.dwHighDateTime << 32) | user.dwLowDateTime def idle_since(self, prev): idle = self.idle - prev.idle user = self.user - prev.user kernel = self.kernel - prev.kernel return float(idle) / float(user + kernel) * 100 elif sys.platform == 'darwin': from ctypes import * libc = cdll.LoadLibrary('/usr/lib/libc.dylib') PROESSOR_CPU_LOAD_INFO = c_int(2) CPU_STATE_USER = 0 CPU_STATE_SYSTEM = 1 CPU_STATE_IDLE = 2 
CPU_STATE_NICE = 3 c_int_p = POINTER(c_int) class State: def __init__(self): num_cpus_u = c_uint(0) cpu_info = c_int_p() cpu_info_cnt = c_int(0) err = libc.host_processor_info( libc.mach_host_self(), PROESSOR_CPU_LOAD_INFO, byref(num_cpus_u), byref(cpu_info), byref(cpu_info_cnt), ) assert err == 0 self.user = 0 self.system = 0 self.idle = 0 self.nice = 0 cur = 0 while cur < cpu_info_cnt.value: self.user += cpu_info[cur + CPU_STATE_USER] self.system += cpu_info[cur + CPU_STATE_SYSTEM] self.idle += cpu_info[cur + CPU_STATE_IDLE] self.nice += cpu_info[cur + CPU_STATE_NICE] cur += num_cpus_u.value def idle_since(self, prev): user = self.user - prev.user system = self.system - prev.system idle = self.idle - prev.idle nice = self.nice - prev.nice return float(idle) / float(user + system + idle + nice) * 100.0 else: print('unknown platform', sys.platform) sys.exit(1) cur_state = State(); print(\"Time,Idle\") while True: time.sleep(1); next_state = State(); now = datetime.datetime.utcnow().isoformat() idle = next_state.idle_since(cur_state) print(\"%s,%s\" % (now, idle)) sys.stdout.flush() cur_state = next_state ", "commid": "rust_pr_61632"}], "negative_passages": []}
{"query_id": "q-en-rust-a0777ab808493aa30d4546d94b8d6f838d0529d50b0d181e0b50b951924fd1cd", "query": "Updated to via rustup. I have a and additional which specify additional rustc arguments. Now if I run The build will succeed but then fail to run qemu because Instead, there's a file named in the directory where xargo was run from. To fix this I had to change the target json file with these options: I'm afraid this could subtly break my builds elsewhere, as the linker is, in fact, GNU ld. I believe this is related to the latest -pie/-no-pie patches which somehow interfere with custom cargo linker arguments specifications (where ends up landing instead of output file name).\nSeems like is likely to be related?\nThat's my suspicion\nhmm my change shouldn't have impacted when the -pie flag was sent, only the -no-pie flag. what version were you using before?\nCould you please build with and post the output related to the link step (after the line)\ntriage: P-high Not assigning this to anyone immediately, since seems to be on it -- if you could take a shot at supplying that debug output it would be very helpful, thanks. =)\ndoes that contain the info you needed?\nIt does look weird as I don't see any standalone in the command line.\nYes, that helps. I think this is what is happening: Even though you have set in your target file to produce position-independent-executables (pass the \"-pie\" flag), the code overrides this because it sees \"-static\" on the linker's command line and decides not to generate PIE. My PR made it pass \"-no-pie\" in this case, and fall back to removing -no-pie if the link fails For some reason this seems to be confusing ld instead of causing it to generate an error Can you confirm this by manually running: (same command line without -no-pie) to see if it works correctly, and also a ? I need to look more into ld to see if -no-pie is even necessary for it, I think maybe the default pie stuff might only be applicable with the gcc front-end. 
If so, I think the fix would be to change the code not to generate -no-pie if linking with ld.\nnm, I think I was able to reproduce this. I don't see -no-pie documented for ld so I think that is the problem. I'll work on a fix.\nThanks, I will test this asap!\nWorks! Thank you!", "positive_passages": [{"docid": "doc-en-rust-4f04670d5bf66f58d0b6ad0f1cd3d0720b9158978c20aab7875293b778b7f0d7", "text": "// linking executables as pie. Different versions of gcc seem to use // different quotes in the error message so don't check for them. if sess.target.target.options.linker_is_gnu && sess.linker_flavor() != LinkerFlavor::Ld && (out.contains(\"unrecognized command line option\") || out.contains(\"unknown argument\")) && out.contains(\"-no-pie\") &&", "commid": "rust_pr_49329"}], "negative_passages": []}
{"query_id": "q-en-rust-a0777ab808493aa30d4546d94b8d6f838d0529d50b0d181e0b50b951924fd1cd", "query": "Updated to via rustup. I have a and additional which specify additional rustc arguments. Now if I run The build will succeed but then fail to run qemu because Instead, there's a file named in the directory where xargo was run from. To fix this I had to change the target json file with these options: I'm afraid this could subtly break my builds elsewhere, as the linker is, in fact, GNU ld. I believe this is related to the latest -pie/-no-pie patches which somehow interfere with custom cargo linker arguments specifications (where ends up landing instead of output file name).\nSeems like is likely to be related?\nThat's my suspicion\nhmm my change shouldn't have impacted when the -pie flag was sent, only the -no-pie flag. what version were you using before?\nCould you please build with and post the output related to the link step (after the line)\ntriage: P-high Not assigning this to anyone immediately, since seems to be on it -- if you could take a shot at supplying that debug output it would be very helpful, thanks. =)\ndoes that contain the info you needed?\nIt does look weird as I don't see any standalone in the command line.\nYes, that helps. I think this is what is happening: Even though you have set in your target file to produce position-independent-executables (pass the \"-pie\" flag), the code overrides this because it sees \"-static\" on the linker's command line and decides not to generate PIE. My PR made it pass \"-no-pie\" in this case, and fall back to removing -no-pie if the link fails For some reason this seems to be confusing ld instead of causing it to generate an error Can you confirm this by manually running: (same command line without -no-pie) to see if it works correctly, and also a ? I need to look more into ld to see if -no-pie is even necessary for it, I think maybe the default pie stuff might only be applicable with the gcc front-end. 
If so, I think the fix would be to change the code not to generate -no-pie if linking with ld.\nnm, I think I was able to reproduce this. I don't see -no-pie documented for ld so I think that is the problem. I'll work on a fix.\nThanks, I will test this asap!\nWorks! Thank you!", "positive_passages": [{"docid": "doc-en-rust-e8c472858b8da5e909abb968dd72d8abd6ce3dfafdb55879a0e8a4190248ba80", "text": "} else { // recent versions of gcc can be configured to generate position // independent executables by default. We have to pass -no-pie to // explicitly turn that off. if sess.target.target.options.linker_is_gnu { // explicitly turn that off. Not applicable to ld. if sess.target.target.options.linker_is_gnu && sess.linker_flavor() != LinkerFlavor::Ld { cmd.no_position_independent_executable(); } }", "commid": "rust_pr_49329"}], "negative_passages": []}
{"query_id": "q-en-rust-4490d0a807cebe407192754fbd723cc8ce3ceceb078d8316cc44a80dbc0b3018", "query": "Servo doesn\u2019t build in today\u2019s Nightly: Since the type is stable I think this is a breaking change. CC for\nThe type is stable, but that impl is . Edit: just realised that's not the impl in question, just wanted to say that I don't think it's a breaking change with regard to stable.\nOops. Indeed, that's a regression.\nEgad sorry about that! I've posted to revert this change", "positive_passages": [{"docid": "doc-en-rust-5e3e96aaa7e42a126e0f703fe5643d3293d219692e9df72ced5554211f0d9686", "text": "#[unstable(feature = \"proc_macro\", issue = \"38356\")] impl iter::FromIterator for tree in trees { builder.push(tree.to_internal()); for stream in streams { builder.push(stream.0); } TokenStream(builder.build()) }", "commid": "rust_pr_49734"}], "negative_passages": []}
{"query_id": "q-en-rust-295feeb296bf0f0df30ae3c5f7dad54e6805758ec4af569b0b0b10673d1bea5c", "query": "This program : See also , which I believe to be relevant. I'm refactoring some of this code, trying to explore the new NLL approach, so I will probably wind up fixing this. But obviously we should have this as a test! cc\nIs this similar to this ICE?", "positive_passages": [{"docid": "doc-en-rust-47939c0e492d3e1ea12fd762792dccddce09168a51075f06fa7512f488643a56", "text": "assigned_map: FxHashMap /// NOTE: A given location may activate more than one borrow in the future /// when more general two-phase borrow support is introduced, but for now we /// only need to store one borrow index activation_map: FxHashMap activation_map: FxHashMap activation_map: FxHashMap activation_map: FxHashMap // This assert is a good sanity check until more general 2-phase borrow // support is introduced. See NOTE on the activation_map field for more assert!(!self.activation_map.contains_key(&activate_location), \"More than one activation introduced at the same location.\"); self.activation_map.insert(activate_location, idx); insert(&mut self.activation_map, &activate_location, idx); insert(&mut self.assigned_map, assigned_place, idx); insert(&mut self.region_map, ®ion, idx); if let Some(local) = root_local(borrowed_place) {", "commid": "rust_pr_49678"}], "negative_passages": []}
{"query_id": "q-en-rust-295feeb296bf0f0df30ae3c5f7dad54e6805758ec4af569b0b0b10673d1bea5c", "query": "This program : See also , which I believe to be relevant. I'm refactoring some of this code, trying to explore the new NLL approach, so I will probably wind up fixing this. But obviously we should have this as a test! cc\nIs this similar to this ICE?", "positive_passages": [{"docid": "doc-en-rust-b8098b4610d9b99304f3d159cc585ec983833f3d60d122c34882818f0f0e9990", "text": "location: Location) { // Handle activations match self.activation_map.get(&location) { Some(&activated) => { debug!(\"activating borrow {:?}\", activated); sets.gen(&ReserveOrActivateIndex::active(activated)) Some(activations) => { for activated in activations { debug!(\"activating borrow {:?}\", activated); sets.gen(&ReserveOrActivateIndex::active(*activated)) } } None => {} }", "commid": "rust_pr_49678"}], "negative_passages": []}
{"query_id": "q-en-rust-295feeb296bf0f0df30ae3c5f7dad54e6805758ec4af569b0b0b10673d1bea5c", "query": "This program : See also , which I believe to be relevant. I'm refactoring some of this code, trying to explore the new NLL approach, so I will probably wind up fixing this. But obviously we should have this as a test! cc\nIs this similar to this ICE?", "positive_passages": [{"docid": "doc-en-rust-3c925445703941c9515139f54fe48cbf849517ec23b4b925dc8a9c057a66537b", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 pub fn memory_locals<'a, 'tcx>(fx: &FunctionCx<'a, 'tcx>) -> BitVector { pub fn non_ssa_locals<'a, 'tcx>(fx: &FunctionCx<'a, 'tcx>) -> BitVector { let mir = fx.mir; let mut analyzer = LocalAnalyzer::new(fx);", "commid": "rust_pr_50048"}], "negative_passages": []}
{"query_id": "q-en-rust-0a2ebeaca1b1a7e642446457b0ba8fba4cc1fe8df33138bc1266a136790039ec", "query": "Context: In order to experiment with an -aware , I was preparing to modify the lang item signature, and my first step was to ensure MIR passes the right type to begin with, as noted in Legitimately, I assumed that the MIR inline code was doing the right thing, so I mimicked it in . The resulting code actually works fine, as far as completing the 3 stages of rustc bootstrapping is involved. But before going the route for the -aware , I first tried removing the special handling of 's , trying to leave it to , shortcutting the lang item. This didn't go well, and produced a stage 1 compiler that would crash on a bad in libsyntax's . From which I derived a small test case that would exhibit the problem with my code. Anyways, I was going well over my head with this approach, thus switched to the signature change. So, what's the deal with this issue, will you ask? Well, it turns out that my MIR changes, essentially copied from MIR inlining, while they worked to produce an apparently working compiler, failed to compile that reduced testcase with an LLVM ERROR. I was wondering if I did something significantly different from what the MIT inlining pass was doing, so I tried to trigger it manually (since it's not enabled by default), and after some trial and error, got it to happen on a plain nightly compiler, with the following reduced testcase: Building with yields: (Note the is only there to force MIR inlining to happen without having to go over the required threshold ; liballoc's has ; similarly, the on avoids inlining of dealloc, to limit the effects on the MIR) Interestingly enough, and fail with the same error. 
The former outputs a truncated MIR (truncated at the entry of the first basic block), and the latter outputs nothing.\nThis is what from a run with looks like: Edit: and what the llvm-ir, as disassembled from output, looks like:\nThe problem is that the code path bb1-bb9-bb21-bb20 in IR, corresponding to bb0-bb1-bb6-bb5 in MIR, is impossible, yet exists. And in that code path, in MIR, in IR, is never initialized, and that's what LLVM doesn't like. I guess one way to look at is it to wonder why in MIR has a switch in the first place.\nAnd the answer to that last question is false edges, according to the following MIR from the mir_map pass:\nActually, the path bb0-bb7-bb4-bb10-bb12 has without ever being initialized, but that's pre-elaborate-drops, and I hear drop is different then, so I don't know if it bad or not.\nThe weird switch that eventually causes the LLVM problem is by the elaborate-drops pass. The MIR after that phase looks like the following:\nI think the most relevant fact is this difference between MIR inlining being enabled or not: with MIR inlining disabled, this basic block in MIR: becomes the following IR: with MIR inlining enabled, this basic block in MIR: becomes whiich is weird... where did the extra code go?\nThe changes from fix this issue.", "positive_passages": [{"docid": "doc-en-rust-21fd2ae3a8b7f55b78e548a4b17a3f408dbb7a8c099893d8a325aecd38e09a07", "text": "// (e.g. structs) into an alloca unconditionally, just so // that we don't have to deal with having two pathways // (gep vs extractvalue etc). 
analyzer.mark_as_memory(mir::Local::new(index)); analyzer.not_ssa(mir::Local::new(index)); } } analyzer.memory_locals analyzer.non_ssa_locals } struct LocalAnalyzer<'mir, 'a: 'mir, 'tcx: 'a> { fx: &'mir FunctionCx<'a, 'tcx>, memory_locals: BitVector, seen_assigned: BitVector dominators: Dominators memory_locals: BitVector::new(fx.mir.local_decls.len()), seen_assigned: BitVector::new(fx.mir.local_decls.len()) dominators: fx.mir.dominators(), non_ssa_locals: BitVector::new(fx.mir.local_decls.len()), first_assignment: IndexVec::from_elem(invalid_location, &fx.mir.local_decls) }; // Arguments get assigned to by means of the function being called for idx in 0..fx.mir.arg_count { analyzer.seen_assigned.insert(idx + 1); for arg in fx.mir.args_iter() { analyzer.first_assignment[arg] = mir::START_BLOCK.start_location(); } analyzer } fn mark_as_memory(&mut self, local: mir::Local) { debug!(\"marking {:?} as memory\", local); self.memory_locals.insert(local.index()); fn first_assignment(&self, local: mir::Local) -> Option fn mark_assigned(&mut self, local: mir::Local) { if !self.seen_assigned.insert(local.index()) { self.mark_as_memory(local); fn not_ssa(&mut self, local: mir::Local) { debug!(\"marking {:?} as non-SSA\", local); self.non_ssa_locals.insert(local.index()); } fn assign(&mut self, local: mir::Local, location: Location) { if self.first_assignment(local).is_some() { self.not_ssa(local); } else { self.first_assignment[local] = location; } } }", "commid": "rust_pr_50048"}], "negative_passages": []}
{"query_id": "q-en-rust-0a2ebeaca1b1a7e642446457b0ba8fba4cc1fe8df33138bc1266a136790039ec", "query": "Context: In order to experiment with an -aware , I was preparing to modify the lang item signature, and my first step was to ensure MIR passes the right type to begin with, as noted in Legitimately, I assumed that the MIR inline code was doing the right thing, so I mimicked it in . The resulting code actually works fine, as far as completing the 3 stages of rustc bootstrapping is involved. But before going the route for the -aware , I first tried removing the special handling of 's , trying to leave it to , shortcutting the lang item. This didn't go well, and produced a stage 1 compiler that would crash on a bad in libsyntax's . From which I derived a small test case that would exhibit the problem with my code. Anyways, I was going well over my head with this approach, thus switched to the signature change. So, what's the deal with this issue, will you ask? Well, it turns out that my MIR changes, essentially copied from MIR inlining, while they worked to produce an apparently working compiler, failed to compile that reduced testcase with an LLVM ERROR. I was wondering if I did something significantly different from what the MIT inlining pass was doing, so I tried to trigger it manually (since it's not enabled by default), and after some trial and error, got it to happen on a plain nightly compiler, with the following reduced testcase: Building with yields: (Note the is only there to force MIR inlining to happen without having to go over the required threshold ; liballoc's has ; similarly, the on avoids inlining of dealloc, to limit the effects on the MIR) Interestingly enough, and fail with the same error. 
The former outputs a truncated MIR (truncated at the entry of the first basic block), and the latter outputs nothing.\nThis is what from a run with looks like: Edit: and what the llvm-ir, as disassembled from output, looks like:\nThe problem is that the code path bb1-bb9-bb21-bb20 in IR, corresponding to bb0-bb1-bb6-bb5 in MIR, is impossible, yet exists. And in that code path, in MIR, in IR, is never initialized, and that's what LLVM doesn't like. I guess one way to look at is it to wonder why in MIR has a switch in the first place.\nAnd the answer to that last question is false edges, according to the following MIR from the mir_map pass:\nActually, the path bb0-bb7-bb4-bb10-bb12 has without ever being initialized, but that's pre-elaborate-drops, and I hear drop is different then, so I don't know if it bad or not.\nThe weird switch that eventually causes the LLVM problem is by the elaborate-drops pass. The MIR after that phase looks like the following:\nI think the most relevant fact is this difference between MIR inlining being enabled or not: with MIR inlining disabled, this basic block in MIR: becomes the following IR: with MIR inlining enabled, this basic block in MIR: becomes whiich is weird... where did the extra code go?\nThe changes from fix this issue.", "positive_passages": [{"docid": "doc-en-rust-fd37b77010d9fae078b1c5f6c388f85f5f53889c8ff7eb67ce3019a3818d9ade", "text": "debug!(\"visit_assign(block={:?}, place={:?}, rvalue={:?})\", block, place, rvalue); if let mir::Place::Local(index) = *place { self.mark_assigned(index); self.assign(index, location); if !self.fx.rvalue_creates_operand(rvalue) { self.mark_as_memory(index); self.not_ssa(index); } } else { self.visit_place(place, PlaceContext::Store, location);", "commid": "rust_pr_50048"}], "negative_passages": []}
{"query_id": "q-en-rust-0a2ebeaca1b1a7e642446457b0ba8fba4cc1fe8df33138bc1266a136790039ec", "query": "Context: In order to experiment with an -aware , I was preparing to modify the lang item signature, and my first step was to ensure MIR passes the right type to begin with, as noted in Legitimately, I assumed that the MIR inline code was doing the right thing, so I mimicked it in . The resulting code actually works fine, as far as completing the 3 stages of rustc bootstrapping is involved. But before going the route for the -aware , I first tried removing the special handling of 's , trying to leave it to , shortcutting the lang item. This didn't go well, and produced a stage 1 compiler that would crash on a bad in libsyntax's . From which I derived a small test case that would exhibit the problem with my code. Anyways, I was going well over my head with this approach, thus switched to the signature change. So, what's the deal with this issue, will you ask? Well, it turns out that my MIR changes, essentially copied from MIR inlining, while they worked to produce an apparently working compiler, failed to compile that reduced testcase with an LLVM ERROR. I was wondering if I did something significantly different from what the MIT inlining pass was doing, so I tried to trigger it manually (since it's not enabled by default), and after some trial and error, got it to happen on a plain nightly compiler, with the following reduced testcase: Building with yields: (Note the is only there to force MIR inlining to happen without having to go over the required threshold ; liballoc's has ; similarly, the on avoids inlining of dealloc, to limit the effects on the MIR) Interestingly enough, and fail with the same error. 
The former outputs a truncated MIR (truncated at the entry of the first basic block), and the latter outputs nothing.\nThis is what from a run with looks like: Edit: and what the llvm-ir, as disassembled from output, looks like:\nThe problem is that the code path bb1-bb9-bb21-bb20 in IR, corresponding to bb0-bb1-bb6-bb5 in MIR, is impossible, yet exists. And in that code path, in MIR, in IR, is never initialized, and that's what LLVM doesn't like. I guess one way to look at is it to wonder why in MIR has a switch in the first place.\nAnd the answer to that last question is false edges, according to the following MIR from the mir_map pass:\nActually, the path bb0-bb7-bb4-bb10-bb12 has without ever being initialized, but that's pre-elaborate-drops, and I hear drop is different then, so I don't know if it bad or not.\nThe weird switch that eventually causes the LLVM problem is by the elaborate-drops pass. The MIR after that phase looks like the following:\nI think the most relevant fact is this difference between MIR inlining being enabled or not: with MIR inlining disabled, this basic block in MIR: becomes the following IR: with MIR inlining enabled, this basic block in MIR: becomes whiich is weird... where did the extra code go?\nThe changes from fix this issue.", "positive_passages": [{"docid": "doc-en-rust-d192cc8bb2369d941821ea7e0062a0dbd1646f02290dd9e361d6b24ccafcba66", "text": "if layout.is_llvm_immediate() || layout.is_llvm_scalar_pair() { // Recurse with the same context, instead of `Projection`, // potentially stopping at non-operand projections, // which would trigger `mark_as_memory` on locals. // which would trigger `not_ssa` on locals. self.visit_place(&proj.base, context, location); return; }", "commid": "rust_pr_50048"}], "negative_passages": []}
{"query_id": "q-en-rust-0a2ebeaca1b1a7e642446457b0ba8fba4cc1fe8df33138bc1266a136790039ec", "query": "Context: In order to experiment with an -aware , I was preparing to modify the lang item signature, and my first step was to ensure MIR passes the right type to begin with, as noted in Legitimately, I assumed that the MIR inline code was doing the right thing, so I mimicked it in . The resulting code actually works fine, as far as completing the 3 stages of rustc bootstrapping is involved. But before going the route for the -aware , I first tried removing the special handling of 's , trying to leave it to , shortcutting the lang item. This didn't go well, and produced a stage 1 compiler that would crash on a bad in libsyntax's . From which I derived a small test case that would exhibit the problem with my code. Anyways, I was going well over my head with this approach, thus switched to the signature change. So, what's the deal with this issue, will you ask? Well, it turns out that my MIR changes, essentially copied from MIR inlining, while they worked to produce an apparently working compiler, failed to compile that reduced testcase with an LLVM ERROR. I was wondering if I did something significantly different from what the MIT inlining pass was doing, so I tried to trigger it manually (since it's not enabled by default), and after some trial and error, got it to happen on a plain nightly compiler, with the following reduced testcase: Building with yields: (Note the is only there to force MIR inlining to happen without having to go over the required threshold ; liballoc's has ; similarly, the on avoids inlining of dealloc, to limit the effects on the MIR) Interestingly enough, and fail with the same error. 
The former outputs a truncated MIR (truncated at the entry of the first basic block), and the latter outputs nothing.\nThis is what from a run with looks like: Edit: and what the llvm-ir, as disassembled from output, looks like:\nThe problem is that the code path bb1-bb9-bb21-bb20 in IR, corresponding to bb0-bb1-bb6-bb5 in MIR, is impossible, yet exists. And in that code path, in MIR, in IR, is never initialized, and that's what LLVM doesn't like. I guess one way to look at is it to wonder why in MIR has a switch in the first place.\nAnd the answer to that last question is false edges, according to the following MIR from the mir_map pass:\nActually, the path bb0-bb7-bb4-bb10-bb12 has without ever being initialized, but that's pre-elaborate-drops, and I hear drop is different then, so I don't know if it bad or not.\nThe weird switch that eventually causes the LLVM problem is by the elaborate-drops pass. The MIR after that phase looks like the following:\nI think the most relevant fact is this difference between MIR inlining being enabled or not: with MIR inlining disabled, this basic block in MIR: becomes the following IR: with MIR inlining enabled, this basic block in MIR: becomes whiich is weird... where did the extra code go?\nThe changes from fix this issue.", "positive_passages": [{"docid": "doc-en-rust-6904f1290897186eb4fde172750a8aa3ab54f24f4629feea1f79ea1715c4453e", "text": "} fn visit_local(&mut self, &index: &mir::Local, &local: &mir::Local, context: PlaceContext<'tcx>, _: Location) { location: Location) { match context { PlaceContext::Call => { self.mark_assigned(index); self.assign(local, location); } PlaceContext::StorageLive | PlaceContext::StorageDead | PlaceContext::Validate | PlaceContext::Validate => {} PlaceContext::Copy | PlaceContext::Move => {} PlaceContext::Move => { // Reads from uninitialized variables (e.g. in dead code, after // optimizations) require locals to be in (uninitialized) memory. 
// NB: there can be uninitialized reads of a local visited after // an assignment to that local, if they happen on disjoint paths. let ssa_read = match self.first_assignment(local) { Some(assignment_location) => { assignment_location.dominates(location, &self.dominators) } None => false }; if !ssa_read { self.not_ssa(local); } } PlaceContext::Inspect | PlaceContext::Store | PlaceContext::AsmOutput | PlaceContext::Borrow { .. } | PlaceContext::Projection(..) => { self.mark_as_memory(index); self.not_ssa(local); } PlaceContext::Drop => { let ty = mir::Place::Local(index).ty(self.fx.mir, self.fx.cx.tcx); let ty = mir::Place::Local(local).ty(self.fx.mir, self.fx.cx.tcx); let ty = self.fx.monomorphize(&ty.to_ty(self.fx.cx.tcx)); // Only need the place if we're actually dropping it. if self.fx.cx.type_needs_drop(ty) { self.mark_as_memory(index); self.not_ssa(local); } } }", "commid": "rust_pr_50048"}], "negative_passages": []}
{"query_id": "q-en-rust-0a2ebeaca1b1a7e642446457b0ba8fba4cc1fe8df33138bc1266a136790039ec", "query": "Context: In order to experiment with an -aware , I was preparing to modify the lang item signature, and my first step was to ensure MIR passes the right type to begin with, as noted in Legitimately, I assumed that the MIR inline code was doing the right thing, so I mimicked it in . The resulting code actually works fine, as far as completing the 3 stages of rustc bootstrapping is involved. But before going the route for the -aware , I first tried removing the special handling of 's , trying to leave it to , shortcutting the lang item. This didn't go well, and produced a stage 1 compiler that would crash on a bad in libsyntax's . From which I derived a small test case that would exhibit the problem with my code. Anyways, I was going well over my head with this approach, thus switched to the signature change. So, what's the deal with this issue, will you ask? Well, it turns out that my MIR changes, essentially copied from MIR inlining, while they worked to produce an apparently working compiler, failed to compile that reduced testcase with an LLVM ERROR. I was wondering if I did something significantly different from what the MIT inlining pass was doing, so I tried to trigger it manually (since it's not enabled by default), and after some trial and error, got it to happen on a plain nightly compiler, with the following reduced testcase: Building with yields: (Note the is only there to force MIR inlining to happen without having to go over the required threshold ; liballoc's has ; similarly, the on avoids inlining of dealloc, to limit the effects on the MIR) Interestingly enough, and fail with the same error. 
The former outputs a truncated MIR (truncated at the entry of the first basic block), and the latter outputs nothing.\nThis is what from a run with looks like: Edit: and what the llvm-ir, as disassembled from output, looks like:\nThe problem is that the code path bb1-bb9-bb21-bb20 in IR, corresponding to bb0-bb1-bb6-bb5 in MIR, is impossible, yet exists. And in that code path, in MIR, in IR, is never initialized, and that's what LLVM doesn't like. I guess one way to look at is it to wonder why in MIR has a switch in the first place.\nAnd the answer to that last question is false edges, according to the following MIR from the mir_map pass:\nActually, the path bb0-bb7-bb4-bb10-bb12 has without ever being initialized, but that's pre-elaborate-drops, and I hear drop is different then, so I don't know if it bad or not.\nThe weird switch that eventually causes the LLVM problem is by the elaborate-drops pass. The MIR after that phase looks like the following:\nI think the most relevant fact is this difference between MIR inlining being enabled or not: with MIR inlining disabled, this basic block in MIR: becomes the following IR: with MIR inlining enabled, this basic block in MIR: becomes whiich is weird... where did the extra code go?\nThe changes from fix this issue.", "positive_passages": [{"docid": "doc-en-rust-60b5fa2bfdb77d8d67a1b948d31c2c9dc477ab531bd33074b9a6c2164d71cfff", "text": "}, }; let memory_locals = analyze::memory_locals(&fx); let memory_locals = analyze::non_ssa_locals(&fx); // Allocate variable and temp allocas fx.locals = {", "commid": "rust_pr_50048"}], "negative_passages": []}
{"query_id": "q-en-rust-ba68e05e233633fd548f84c5230d643f7efab0ada1389ab2ed74950ffb7335fe", "query": "When I run the test suite on campus, I get a failure in : The test says \"this IP is unroutable, so connections should always time out,\" but evidently there's something about the setup at this location that was not anticipated. The results are the same whether I am connected to WiFi or through ethernet. Both connections go through the same router (a Linksys WRT120N I have physical access to, which doesn't appear to save any sort of logs), and I don't know where it goes after that. I can work around it by disconnecting from the internet entirely, and do not have issues when running the tests at home (same laptop, different network).\nis your subnet 10.255.255.0/24 or 10.255.0.0/16 or 10.0.0.0/8 or anything similar? You can check by running . Either way, the test is ill formed and should be correctled.\nI'm not there right now, but just in case I forget to come back to this, I do recall that the router is accessed through an IP in the 192.168.x.x subnet. (I'm not sure if this precludes the situation you have described) (I wish GitHub notifications had a \"mark unread\" feature!)\nIt appears not to be. wifi ethernet:\nI think I must have misread something when making that test - the IP is inside of an address block reserved for private networks which means it could be routable depending on how your network is set up. We could instead use an IP from one of the documentation blocks like 192.0.2.0.\nThe address 10.255.255.1 is routable on my vanilla Ubuntu 18.04 box: The address 192.0.2.0 is not routable: Therefore, this test always fails on this box. 
I will file a PR changing it to 192.0.2.0.\nThis is a dup of .", "positive_passages": [{"docid": "doc-en-rust-370b94bc5b44edae3b733a3c8ed8438123a981a213b848cbc52be4497314655a", "text": "} #[test] fn connect_timeout_unroutable() { // this IP is unroutable, so connections should always time out, // provided the network is reachable to begin with. let addr = \"10.255.255.1:80\".parse().unwrap(); let e = TcpStream::connect_timeout(&addr, Duration::from_millis(250)).unwrap_err(); assert!(e.kind() == io::ErrorKind::TimedOut || e.kind() == io::ErrorKind::Other, \"bad error: {} {:?}\", e, e.kind()); } #[test] fn connect_timeout_unbound() { // bind and drop a socket to track down a \"probably unassigned\" port let socket = TcpListener::bind(\"127.0.0.1:0\").unwrap();", "commid": "rust_pr_57584"}], "negative_passages": []}
{"query_id": "q-en-rust-d2884a1b6c1ce7c85420c3ad44910851fa8a8c5a763de172c053a6ca52fff6de", "query": "I think this is the fourth beta I'm having to re-figure-out how all this works and figure out how to ignore their failures on beta. can we just exclude these tools from toolstate until this is figured out?\nI'm slightly confused, why would these tools need to be disabled/their submodule update if they were building on master? Isn't the beta just the current master branch + some tweaks for --version output and no unstable features? The release in March actually prevented the merge of any PRs on master that would break a tool. So I thought the idea was to fix all tools in that week. Or is this about beta backports breaking tools?\nI don't think we've had a beta yet whether either clippy or miri were passing tests and fails the build if any tool fails to build on beta. We also do not want to release either of these tools to stable/beta yet.\nneither of these tools has a step, so there's no danger in that (the last beta) didn't have clippy issues. Since we have grace period of a week, I can easily get in a working clippy within that week (which then won't be broken until the beta). My issue is mainly that I don't really know when it begins ;) Maybe we can ping the tool authors once a day that their tool needs to be fixed for the upcoming beta?\nMy point with this issue is that we shouldn't automate ourselves into a hole where tools we do not want to ship block releases. That should not require authors to fix tools, we should simply ignore failures or have an easy way of disabling them. Right now producing a beta is pain as you have to resend it to bors multiple times after re-learning that these are blocking the release. 
We do not ship miri/clippy but a successful clippy compilation affects how rls is compiled, which should not be happening on beta/stable.\nThe code that influences rls is explicitly skipped on beta/stable, so the rls is completely free of clippy on beta/stable for miri I agree, but for clippy, aren't we trying to move to a model where we actually dist it on stable, so I'd call having it ready for stable some form of proof of concept for the distributing part.\nOk sure yes rls is protected but I feel like this is missing my point. Miri isn't ready. Clippy isn't ready. They're blocking beta releases. That shouldn't happen.", "positive_passages": [{"docid": "doc-en-rust-404d6e78cf72132ed2f1e1b6234d4326bef6c8d83632640ebcca042ecd619e22", "text": "touch \"$TOOLSTATE_FILE\" # Try to test all the tools and store the build/test success in the TOOLSTATE_FILE set +e python2.7 \"$X_PY\" test --no-fail-fast src/doc/book ", "commid": "rust_pr_50573"}], "negative_passages": []}
{"query_id": "q-en-rust-d2884a1b6c1ce7c85420c3ad44910851fa8a8c5a763de172c053a6ca52fff6de", "query": "I think this is the fourth beta I'm having to re-figure-out how all this works and figure out how to ignore their failures on beta. can we just exclude these tools from toolstate until this is figured out?\nI'm slightly confused, why would these tools need to be disabled/their submodule update if they were building on master? Isn't the beta just the current master branch + some tweaks for --version output and no unstable features? The release in March actually prevented the merge of any PRs on master that would break a tool. So I thought the idea was to fix all tools in that week. Or is this about beta backports breaking tools?\nI don't think we've had a beta yet whether either clippy or miri were passing tests and fails the build if any tool fails to build on beta. We also do not want to release either of these tools to stable/beta yet.\nneither of these tools has a step, so there's no danger in that (the last beta) didn't have clippy issues. Since we have grace period of a week, I can easily get in a working clippy within that week (which then won't be broken until the beta). My issue is mainly that I don't really know when it begins ;) Maybe we can ping the tool authors once a day that their tool needs to be fixed for the upcoming beta?\nMy point with this issue is that we shouldn't automate ourselves into a hole where tools we do not want to ship block releases. That should not require authors to fix tools, we should simply ignore failures or have an easy way of disabling them. Right now producing a beta is pain as you have to resend it to bors multiple times after re-learning that these are blocking the release. 
We do not ship miri/clippy but a successful clippy compilation affects how rls is compiled, which should not be happening on beta/stable.\nThe code that influences rls is explicitly skipped on beta/stable, so the rls is completely free of clippy on beta/stable for miri I agree, but for clippy, aren't we trying to move to a model where we actually dist it on stable, so I'd call having it ready for stable some form of proof of concept for the distributing part.\nOk sure yes rls is protected but I feel like this is missing my point. Miri isn't ready. Clippy isn't ready. They're blocking beta releases. That shouldn't happen.", "positive_passages": [{"docid": "doc-en-rust-80adeffd8f7aea2e506dd9af7a37033dbede7940eb63883a53160c95e71230e4", "text": "cat \"$TOOLSTATE_FILE\" echo # This function checks that if a tool's submodule changed, the tool's state must improve verify_status() { echo \"Verifying status of $1...\" if echo \"$CHANGED_FILES\" | grep -q \"^M[[:blank:]]$2$\"; then", "commid": "rust_pr_50573"}], "negative_passages": []}
{"query_id": "q-en-rust-d2884a1b6c1ce7c85420c3ad44910851fa8a8c5a763de172c053a6ca52fff6de", "query": "I think this is the fourth beta I'm having to re-figure-out how all this works and figure out how to ignore their failures on beta. can we just exclude these tools from toolstate until this is figured out?\nI'm slightly confused, why would these tools need to be disabled/their submodule update if they were building on master? Isn't the beta just the current master branch + some tweaks for --version output and no unstable features? The release in March actually prevented the merge of any PRs on master that would break a tool. So I thought the idea was to fix all tools in that week. Or is this about beta backports breaking tools?\nI don't think we've had a beta yet whether either clippy or miri were passing tests and fails the build if any tool fails to build on beta. We also do not want to release either of these tools to stable/beta yet.\nneither of these tools has a step, so there's no danger in that (the last beta) didn't have clippy issues. Since we have grace period of a week, I can easily get in a working clippy within that week (which then won't be broken until the beta). My issue is mainly that I don't really know when it begins ;) Maybe we can ping the tool authors once a day that their tool needs to be fixed for the upcoming beta?\nMy point with this issue is that we shouldn't automate ourselves into a hole where tools we do not want to ship block releases. That should not require authors to fix tools, we should simply ignore failures or have an easy way of disabling them. Right now producing a beta is pain as you have to resend it to bors multiple times after re-learning that these are blocking the release. 
We do not ship miri/clippy but a successful clippy compilation affects how rls is compiled, which should not be happening on beta/stable.\nThe code that influences rls is explicitly skipped on beta/stable, so the rls is completely free of clippy on beta/stable for miri I agree, but for clippy, aren't we trying to move to a model where we actually dist it on stable, so I'd call having it ready for stable some form of proof of concept for the distributing part.\nOk sure yes rls is protected but I feel like this is missing my point. Miri isn't ready. Clippy isn't ready. They're blocking beta releases. That shouldn't happen.", "positive_passages": [{"docid": "doc-en-rust-dc22cdea4e93d83e8022dd65f88645e01893f33b2129d0de9e9bb06b3a6516ce", "text": "fi } # deduplicates the submodule check and the assertion that on beta some tools MUST be passing check_dispatch() { if [ \"$1\" = submodule_changed ]; then # ignore $2 (branch id) verify_status $3 $4 elif [ \"$2\" = beta ]; then echo \"Requiring test passing for $3...\" if grep -q '\"'\"$3\"'\":\"(test|build)-fail\"' \"$TOOLSTATE_FILE\"; then exit 4 fi fi } # list all tools here status_check() { check_dispatch $1 beta book src/doc/book check_dispatch $1 beta nomicon src/doc/nomicon check_dispatch $1 beta reference src/doc/reference check_dispatch $1 beta rust-by-example src/doc/rust-by-example check_dispatch $1 beta rls src/tool/rls check_dispatch $1 beta rustfmt src/tool/rustfmt # these tools are not required for beta to successfully branch check_dispatch $1 nightly clippy-driver src/tool/clippy check_dispatch $1 nightly miri src/tool/miri } # If this PR is intended to update one of these tools, do not let the build pass # when they do not test-pass. 
verify_status book src/doc/book verify_status nomicon src/doc/nomicon verify_status reference src/doc/reference verify_status rust-by-example src/doc/rust-by-example verify_status rls src/tool/rls verify_status rustfmt src/tool/rustfmt verify_status clippy-driver src/tool/clippy verify_status miri src/tool/miri status_check \"submodule_changed\" if [ \"$RUST_RELEASE_CHANNEL\" = nightly -a -n \"${TOOLSTATE_REPO_ACCESS_TOKEN+is_set}\" ]; then . \"$(dirname $0)/repo.sh\"", "commid": "rust_pr_50573"}], "negative_passages": []}
{"query_id": "q-en-rust-d2884a1b6c1ce7c85420c3ad44910851fa8a8c5a763de172c053a6ca52fff6de", "query": "I think this is the fourth beta I'm having to re-figure-out how all this works and figure out how to ignore their failures on beta. can we just exclude these tools from toolstate until this is figured out?\nI'm slightly confused, why would these tools need to be disabled/their submodule update if they were building on master? Isn't the beta just the current master branch + some tweaks for --version output and no unstable features? The release in March actually prevented the merge of any PRs on master that would break a tool. So I thought the idea was to fix all tools in that week. Or is this about beta backports breaking tools?\nI don't think we've had a beta yet whether either clippy or miri were passing tests and fails the build if any tool fails to build on beta. We also do not want to release either of these tools to stable/beta yet.\nneither of these tools has a step, so there's no danger in that (the last beta) didn't have clippy issues. Since we have grace period of a week, I can easily get in a working clippy within that week (which then won't be broken until the beta). My issue is mainly that I don't really know when it begins ;) Maybe we can ping the tool authors once a day that their tool needs to be fixed for the upcoming beta?\nMy point with this issue is that we shouldn't automate ourselves into a hole where tools we do not want to ship block releases. That should not require authors to fix tools, we should simply ignore failures or have an easy way of disabling them. Right now producing a beta is pain as you have to resend it to bors multiple times after re-learning that these are blocking the release. 
We do not ship miri/clippy but a successful clippy compilation affects how rls is compiled, which should not be happening on beta/stable.\nThe code that influences rls is explicitly skipped on beta/stable, so the rls is completely free of clippy on beta/stable for miri I agree, but for clippy, aren't we trying to move to a model where we actually dist it on stable, so I'd call having it ready for stable some form of proof of concept for the distributing part.\nOk sure yes rls is protected but I feel like this is missing my point. Miri isn't ready. Clippy isn't ready. They're blocking beta releases. That shouldn't happen.", "positive_passages": [{"docid": "doc-en-rust-c07a8d7c3780da87d21f3d68ce8d02ab9576f8692fa07bdc2c2cf13cc2b9d862", "text": "exit 0 fi if grep -q fail \"$TOOLSTATE_FILE\"; then exit 4 fi # abort compilation if an important tool doesn't build # (this code is reachable if not on the nightly channel) status_check \"beta_required\" ", "commid": "rust_pr_50573"}], "negative_passages": []}
{"query_id": "q-en-rust-17e28c23ef88ba0e1b08da6cf477a82e843c5ef036017727d25ade62c9390c06", "query": "When testing a local rebuild in Fedora -- using rustc 1.26.0 as stage0 to build rustc 1.26.0 again -- I ran into this ICE: The \"slice index starts at 1 but ends at 0\" comes from here: With GDB I found that is completely empty! Stepping through from the very beginning, does get the right argc and argv, calling appropriately. But the addresses of and are not where accesses later from , so that just sees and . It turns out that the bootstrap was causing my to load the freshly-built libraries, rather than its own in . The bootstrap wrapper does try to set a libdir at the front of that path, but it uses , here , so that doesn't help. This only hits local-rebuild, because normal prior-release stage0 builds have different library metadata.\nIs this fixed? cc\nSorry, yes, fixed by .", "positive_passages": [{"docid": "doc-en-rust-455d3c934cdcc095a8dae7b3162c33e8cb9a6a88b485d76e2c8943772edeb940", "text": "// FIXME: Temporary fix for https://github.com/rust-lang/cargo/issues/3005 // Force cargo to output binaries with disambiguating hashes in the name cargo.env(\"__CARGO_DEFAULT_LIB_METADATA\", &self.config.channel); let metadata = if compiler.stage == 0 { // Treat stage0 like special channel, whether it's a normal prior- // release rustc or a local rebuild with the same version, so we // never mix these libraries by accident. \"bootstrap\" } else { &self.config.channel }; cargo.env(\"__CARGO_DEFAULT_LIB_METADATA\", &metadata); let stage; if compiler.stage == 0 && self.local_rebuild {", "commid": "rust_pr_50789"}], "negative_passages": []}
{"query_id": "q-en-rust-3ab25983fc6b01240f3f975c30df52d8e2b8b58602fa07a35a3c1d140ae8603d", "query": "For type names only, submodules can't apparently see non-pub types defined in the parent module. So, this code: ... generates this error:\nNow it at least tells you that is private: Though, it doesn't need to display the error twice and could do better than .\nlinking to for unified tracking of resolve\nClosing as a dupe of , this would be allowed with the rules in that bug.", "positive_passages": [{"docid": "doc-en-rust-2bef0edcdf7865127c0d209c39b604a30f1689341a38d38f4cbfc1ac522249a9", "text": " Subproject commit 8b7f7e667268921c278af94ae30a61e87a22b22b Subproject commit 329923edec41d0ddbea7f30ab12fca0436d459ae ", "commid": "rust_pr_69692"}], "negative_passages": []}
{"query_id": "q-en-rust-d08a597a6f1442c613d68fc673ac93e272370bf91300ee1d13952bf52364366f", "query": "This issue might be related to , which has been fixed in playground link: Backtrace:\nremoving the argument inside the closure gives a different ICE. Interestingly stable refuses to compile this, but beta and nightly ICE's playground: Backtrace:\nUPDATE: now also ICE's in stable.\nNow no longer ICE's since 1.32.\nThe ICE doesn't appear on the latest nightly via the above playground link, marked as E-needstest", "positive_passages": [{"docid": "doc-en-rust-5bddb5543d6f5ab8230be1cfadf1ad41d4d3bd205a6f7f8090a7f4a5ed8bc869", "text": " #![feature(const_raw_ptr_to_usize_cast)] fn main() { [(); &(static |x| {}) as *const _ as usize]; //~^ ERROR: closures cannot be static //~| ERROR: type annotations needed [(); &(static || {}) as *const _ as usize]; //~^ ERROR: closures cannot be static //~| ERROR: evaluation of constant value failed } ", "commid": "rust_pr_66331"}], "negative_passages": []}
{"query_id": "q-en-rust-d08a597a6f1442c613d68fc673ac93e272370bf91300ee1d13952bf52364366f", "query": "This issue might be related to , which has been fixed in playground link: Backtrace:\nremoving the argument inside the closure gives a different ICE. Interestingly stable refuses to compile this, but beta and nightly ICE's playground: Backtrace:\nUPDATE: now also ICE's in stable.\nNow no longer ICE's since 1.32.\nThe ICE doesn't appear on the latest nightly via the above playground link, marked as E-needstest", "positive_passages": [{"docid": "doc-en-rust-72554819f1e09f22a198a99bff5992474411083d7ca6fd279f1de1acee4146d5", "text": " error[E0697]: closures cannot be static --> $DIR/issue-52432.rs:4:12 | LL | [(); &(static |x| {}) as *const _ as usize]; | ^^^^^^^^^^ error[E0697]: closures cannot be static --> $DIR/issue-52432.rs:7:12 | LL | [(); &(static || {}) as *const _ as usize]; | ^^^^^^^^^ error[E0282]: type annotations needed --> $DIR/issue-52432.rs:4:20 | LL | [(); &(static |x| {}) as *const _ as usize]; | ^ consider giving this closure parameter a type error[E0080]: evaluation of constant value failed --> $DIR/issue-52432.rs:7:10 | LL | [(); &(static || {}) as *const _ as usize]; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ \"pointer-to-integer cast\" needs an rfc before being allowed inside constants error: aborting due to 4 previous errors Some errors have detailed explanations: E0080, E0282, E0697. For more information about an error, try `rustc --explain E0080`. ", "commid": "rust_pr_66331"}], "negative_passages": []}
{"query_id": "q-en-rust-d08a597a6f1442c613d68fc673ac93e272370bf91300ee1d13952bf52364366f", "query": "This issue might be related to , which has been fixed in playground link: Backtrace:\nremoving the argument inside the closure gives a different ICE. Interestingly stable refuses to compile this, but beta and nightly ICE's playground: Backtrace:\nUPDATE: now also ICE's in stable.\nNow no longer ICE's since 1.32.\nThe ICE doesn't appear on the latest nightly via the above playground link, marked as E-needstest", "positive_passages": [{"docid": "doc-en-rust-724d6dc7cfbe8265036e53ed5f6630adf31768d81ecf2bbd8f5c263f4af2e870", "text": " // check-pass #![allow(dead_code)] trait Structure where S: Structure SeStr where S: Structure /// `Cursor`s are typically used with in-memory buffers to allow them to /// implement [`Read`] and/or [`Write`], allowing these buffers to be used /// anywhere you might use a reader or writer that does actual I/O. /// `Cursor`s are used with in-memory buffers, anything implementing /// `AsRef<[u8]>`, to allow them to implement [`Read`] and/or [`Write`], /// allowing these buffers to be used anywhere you might use a reader or writer /// that does actual I/O. /// /// The standard library implements some I/O traits on various types which /// are commonly used as a buffer, like `Cursor<`[`Vec`]` /// Creates a new cursor wrapping the provided underlying I/O object. /// Creates a new cursor wrapping the provided underlying in-memory buffer. /// /// Cursor initial position is `0` even if underlying object (e. /// g. `Vec`) is not empty. So writing to cursor starts with /// overwriting `Vec` content, not with appending to it. /// Cursor initial position is `0` even if underlying buffer (e.g. `Vec`) /// is not empty. So writing to cursor starts with overwriting `Vec` /// content, not with appending to it. /// /// # Examples ///", "commid": "rust_pr_52548"}], "negative_passages": []}
{"query_id": "q-en-rust-e9dca273549beedd5b5489e201dc989d085cd5755cece8fde4d09f1712232875", "query": "Reproduced on versions: - Isolated example to reproduce: Actual error: Mentioned suggestion about \"wrap into \" is incorrect at all. There we should deref the value from the . - - Expected error: (something like this)\nHere's where the suggestion gets issued: The motivation was to solve (where a -expression appears in tail position, which seemed likely to be a common mistake), but it turns out that there are other situations where we run into a type mismatch between match arms of a desugared expression. We regret the error.\nHere's another case of this:\nI see what's happening, we're only checking wether the error came from a desugaring, disregarding the types involved. It should be possible to check that information as well, even if the suggestion is only provided when it's a where everything but the error type matches. (I think it's ok if the supplied error cannot be converted to the expected error, as this suggestion exposes newcomers to a non-obvious construct at least.)\nClippy to detect this that we might want to uplift ...\n(But for now I'm just going to remove the suggestion.)", "positive_passages": [{"docid": "doc-en-rust-387b544c914de95389b319be1aeeb1b4522cfe26858749aa86ec2fba8dfe66c6", "text": "err.span_label(arm_span, msg); } } hir::MatchSource::TryDesugar => { // Issue #51632 if let Ok(try_snippet) = self.tcx.sess.source_map().span_to_snippet(arm_span) { err.span_suggestion_with_applicability( arm_span, \"try wrapping with a success variant\", format!(\"Ok({})\", try_snippet), Applicability::MachineApplicable, ); } } hir::MatchSource::TryDesugar => {} _ => { let msg = \"match arm with an incompatible type\"; if self.tcx.sess.source_map().is_multiline(arm_span) {", "commid": "rust_pr_55423"}], "negative_passages": []}
{"query_id": "q-en-rust-e9dca273549beedd5b5489e201dc989d085cd5755cece8fde4d09f1712232875", "query": "Reproduced on versions: - Isolated example to reproduce: Actual error: Mentioned suggestion about \"wrap into \" is incorrect at all. There we should deref the value from the . - - Expected error: (something like this)\nHere's where the suggestion gets issued: The motivation was to solve (where a -expression appears in tail position, which seemed likely to be a common mistake), but it turns out that there are other situations where we run into a type mismatch between match arms of a desugared expression. We regret the error.\nHere's another case of this:\nI see what's happening, we're only checking wether the error came from a desugaring, disregarding the types involved. It should be possible to check that information as well, even if the suggestion is only provided when it's a where everything but the error type matches. (I think it's ok if the supplied error cannot be converted to the expected error, as this suggestion exposes newcomers to a non-obvious construct at least.)\nClippy to detect this that we might want to uplift ...\n(But for now I'm just going to remove the suggestion.)", "positive_passages": [{"docid": "doc-en-rust-1e4985b5942246975c7be74a00b242e8c3492d4b3d8bdb2fd41296d89a828bf4", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 ", "commid": "rust_pr_55423"}], "negative_passages": []}
{"query_id": "q-en-rust-e9dca273549beedd5b5489e201dc989d085cd5755cece8fde4d09f1712232875", "query": "Reproduced on versions: - Isolated example to reproduce: Actual error: Mentioned suggestion about \"wrap into \" is incorrect at all. There we should deref the value from the . - - Expected error: (something like this)\nHere's where the suggestion gets issued: The motivation was to solve (where a -expression appears in tail position, which seemed likely to be a common mistake), but it turns out that there are other situations where we run into a type mismatch between match arms of a desugared expression. We regret the error.\nHere's another case of this:\nI see what's happening, we're only checking wether the error came from a desugaring, disregarding the types involved. It should be possible to check that information as well, even if the suggestion is only provided when it's a where everything but the error type matches. (I think it's ok if the supplied error cannot be converted to the expected error, as this suggestion exposes newcomers to a non-obvious construct at least.)\nClippy to detect this that we might want to uplift ...\n(But for now I'm just going to remove the suggestion.)", "positive_passages": [{"docid": "doc-en-rust-1feaf58101a17b5147829d1ea1aad9e4f0c635e3a34ff7a32f221616708e1c27", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms. // run-rustfix #![allow(dead_code)] fn missing_discourses() -> Result //~| HELP try wrapping with a success variant } fn main() {}", "commid": "rust_pr_55423"}], "negative_passages": []}
{"query_id": "q-en-rust-e9dca273549beedd5b5489e201dc989d085cd5755cece8fde4d09f1712232875", "query": "Reproduced on versions: - Isolated example to reproduce: Actual error: Mentioned suggestion about \"wrap into \" is incorrect at all. There we should deref the value from the . - - Expected error: (something like this)\nHere's where the suggestion gets issued: The motivation was to solve (where a -expression appears in tail position, which seemed likely to be a common mistake), but it turns out that there are other situations where we run into a type mismatch between match arms of a desugared expression. We regret the error.\nHere's another case of this:\nI see what's happening, we're only checking wether the error came from a desugaring, disregarding the types involved. It should be possible to check that information as well, even if the suggestion is only provided when it's a where everything but the error type matches. (I think it's ok if the supplied error cannot be converted to the expected error, as this suggestion exposes newcomers to a non-obvious construct at least.)\nClippy to detect this that we might want to uplift ...\n(But for now I'm just going to remove the suggestion.)", "positive_passages": [{"docid": "doc-en-rust-d1bc2b172cb70190c65a14b5dfa6b929349378348910929ad014f2a551242d21", "text": "error[E0308]: try expression alternatives have incompatible types --> $DIR/issue-51632-try-desugar-incompatible-types.rs:20:5 --> $DIR/issue-51632-try-desugar-incompatible-types.rs:18:5 | LL | missing_discourses()? | ^^^^^^^^^^^^^^^^^^^^^ | | | expected enum `std::result::Result`, found isize | help: try wrapping with a success variant: `Ok(missing_discourses()?)` | ^^^^^^^^^^^^^^^^^^^^^ expected enum `std::result::Result`, found isize | = note: expected type `std::result::Result Subproject commit 52a6a4d7087d14a35d44a11c39c77fa79d71378d Subproject commit d549d85b1735dc5066b2973f8549557a813bb9c8 ", "commid": "rust_pr_52983"}], "negative_passages": []}
{"query_id": "q-en-rust-b134729004a99970bd0ebb29cdf9e30f7b25f01ba1acbda070f62cfbf3553ae9", "query": "LLVM 7.0 has been branched as of 2018-08-01, with the final tag probably a couple weeks away. The first release candidate will be tagged in a couple of days. is the llvm-dev announcement, and is a view of the commits on the branch. Once the final release has been tagged, Rust should be updated to LLVM 7.0.\nGreat! I've started our branches: I'm hoping that this'll be pretty smooth since we upgraded most of the way through the cycle! I'm testing locally now and will send a PR once green.", "positive_passages": [{"docid": "doc-en-rust-22a5e40badb1b6003ca1c24e2ec339ee53d50ed6d40326eb9c5cd6964b9ae7f3", "text": " Subproject commit 03684905101f0b7e49dfe530e54dc1aeac6ef0fb Subproject commit e19f07f5a6e5546ab4f6ea951e3c6b8627edeaa7 ", "commid": "rust_pr_52983"}], "negative_passages": []}
{"query_id": "q-en-rust-b134729004a99970bd0ebb29cdf9e30f7b25f01ba1acbda070f62cfbf3553ae9", "query": "LLVM 7.0 has been branched as of 2018-08-01, with the final tag probably a couple weeks away. The first release candidate will be tagged in a couple of days. is the llvm-dev announcement, and is a view of the commits on the branch. Once the final release has been tagged, Rust should be updated to LLVM 7.0.\nGreat! I've started our branches: I'm hoping that this'll be pretty smooth since we upgraded most of the way through the cycle! I'm testing locally now and will send a PR once green.", "positive_passages": [{"docid": "doc-en-rust-87641d3cfe5a484aa36a83a0fbf84b531b0529853af4fefdce2fe4982a28ca4b", "text": "# If this file is modified, then llvm will be (optionally) cleaned and then rebuilt. # The actual contents of this file do not matter, but to trigger a change on the # build bots then the contents should be changed so git updates the mtime. 2018-07-12 No newline at end of file 2018-08-02 ", "commid": "rust_pr_52983"}], "negative_passages": []}
{"query_id": "q-en-rust-b134729004a99970bd0ebb29cdf9e30f7b25f01ba1acbda070f62cfbf3553ae9", "query": "LLVM 7.0 has been branched as of 2018-08-01, with the final tag probably a couple weeks away. The first release candidate will be tagged in a couple of days. is the llvm-dev announcement, and is a view of the commits on the branch. Once the final release has been tagged, Rust should be updated to LLVM 7.0.\nGreat! I've started our branches: I'm hoping that this'll be pretty smooth since we upgraded most of the way through the cycle! I'm testing locally now and will send a PR once green.", "positive_passages": [{"docid": "doc-en-rust-0d1fbd9cf0143e1ef1401934c2ffd99bc8f0fb146bc57a1d4c2495b66fb7f7b1", "text": " Subproject commit 8214ccf861d538671b0a1436dbf4538dc4a64d09 Subproject commit f76ea3ca16ed22dde8ef929db74a4b4df6f2f899 ", "commid": "rust_pr_52983"}], "negative_passages": []}
{"query_id": "q-en-rust-a10ebf118771c08c5b9499c75b4940d17444dad36f993b7b4a374436e600f49f", "query": "a programming language that I am working on, and the VM is written in Rust. Up until Rust nightly 2018-08-17, everything works fine. Starting with the nightly from the 17th, I'm observing various crashes and different program behaviour. For example: On Windows it will either fail with a , or (more on this in a moment). On Linux . Note that the funny segfault output is because the command is started with Ruby, and Ruby installs its own segmentation fault handler. Locally it will usually fail with the same runtime error as observed in Windows above, but sometimes it will segfault. Sometimes it will panic because certain operations are performed using NULL pointers where this is not expected. The last nightly that did not suffer from these problems was Rust 2018-08-16. Stable Rust also works fine. When the segmentation faults happen, they are usually in different places. For example, for one segmentation fault the backtrace is as follows: 0x00007ffff7e12763 in intmalloc () from 0x00007ffff7e13ada in malloc () from 0x0000555555568e6b in alloc::alloc::alloc (layout=...) at .field(&self.ring) .field(&self.tail) .field(&self.head) .finish() .field(&front) .field(&back) .finish() } }", "commid": "rust_pr_53571"}], "negative_passages": []}
{"query_id": "q-en-rust-a10ebf118771c08c5b9499c75b4940d17444dad36f993b7b4a374436e600f49f", "query": "a programming language that I am working on, and the VM is written in Rust. Up until Rust nightly 2018-08-17, everything works fine. Starting with the nightly from the 17th, I'm observing various crashes and different program behaviour. For example: On Windows it will either fail with a , or (more on this in a moment). On Linux . Note that the funny segfault output is because the command is started with Ruby, and Ruby installs its own segmentation fault handler. Locally it will usually fail with the same runtime error as observed in Windows above, but sometimes it will segfault. Sometimes it will panic because certain operations are performed using NULL pointers where this is not expected. The last nightly that did not suffer from these problems was Rust 2018-08-16. Stable Rust also works fine. When the segmentation faults happen, they are usually in different places. For example, for one segmentation fault the backtrace is as follows: 0x00007ffff7e12763 in intmalloc () from 0x00007ffff7e13ada in malloc () from 0x0000555555568e6b in alloc::alloc::alloc (layout=...) at .field(&self.ring) .field(&self.tail) .field(&self.head) .finish() .field(&front) .field(&back) .finish() } }", "commid": "rust_pr_53571"}], "negative_passages": []}
{"query_id": "q-en-rust-a10ebf118771c08c5b9499c75b4940d17444dad36f993b7b4a374436e600f49f", "query": "a programming language that I am working on, and the VM is written in Rust. Up until Rust nightly 2018-08-17, everything works fine. Starting with the nightly from the 17th, I'm observing various crashes and different program behaviour. For example: On Windows it will either fail with a , or (more on this in a moment). On Linux . Note that the funny segfault output is because the command is started with Ruby, and Ruby installs its own segmentation fault handler. Locally it will usually fail with the same runtime error as observed in Windows above, but sometimes it will segfault. Sometimes it will panic because certain operations are performed using NULL pointers where this is not expected. The last nightly that did not suffer from these problems was Rust 2018-08-16. Stable Rust also works fine. When the segmentation faults happen, they are usually in different places. For example, for one segmentation fault the backtrace is as follows: 0x00007ffff7e12763 in intmalloc () from 0x00007ffff7e13ada in malloc () from 0x0000555555568e6b in alloc::alloc::alloc (layout=...) at // command specific path, we call clear_if_dirty with this let mut my_out = match cmd { \"build\" => self.cargo_out(compiler, mode, target), // This is the intended out directory for crate documentation. \"doc\" | \"rustdoc\" => self.crate_doc_out(target), _ => self.stage_out(compiler, mode), }; // This is for the original compiler, but if we're forced to use stage 1, then // std/test/rustc stamps won't exist in stage 2, so we need to get those from stage 1, since // we copy the libs forward. 
let cmp = self.compiler_for(compiler.stage, compiler.host, target); let libstd_stamp = match cmd { \"check\" | \"clippy\" | \"fix\" => check::libstd_stamp(self, cmp, target), _ => compile::libstd_stamp(self, cmp, target), }; let libtest_stamp = match cmd { \"check\" | \"clippy\" | \"fix\" => check::libtest_stamp(self, cmp, target), _ => compile::libtest_stamp(self, cmp, target), }; let librustc_stamp = match cmd { \"check\" | \"clippy\" | \"fix\" => check::librustc_stamp(self, cmp, target), _ => compile::librustc_stamp(self, cmp, target), }; // Codegen backends are not yet tracked by -Zbinary-dep-depinfo, // so we need to explicitly clear out if they've been updated. for backend in self.codegen_backends(compiler) { self.clear_if_dirty(&out_dir, &backend); } if cmd == \"doc\" || cmd == \"rustdoc\" { if mode == Mode::Rustc || mode == Mode::ToolRustc || mode == Mode::Codegen { let my_out = match mode { // This is the intended out directory for compiler documentation. my_out = self.compiler_doc_out(target); } Mode::Rustc | Mode::ToolRustc | Mode::Codegen => self.compiler_doc_out(target), _ => self.crate_doc_out(target), }; let rustdoc = self.rustdoc(compiler); self.clear_if_dirty(&my_out, &rustdoc); } else if cmd != \"test\" { match mode { Mode::Std => { self.clear_if_dirty(&my_out, &self.rustc(compiler)); for backend in self.codegen_backends(compiler) { self.clear_if_dirty(&my_out, &backend); } }, Mode::Test => { self.clear_if_dirty(&my_out, &libstd_stamp); }, Mode::Rustc => { self.clear_if_dirty(&my_out, &self.rustc(compiler)); self.clear_if_dirty(&my_out, &libstd_stamp); self.clear_if_dirty(&my_out, &libtest_stamp); }, Mode::Codegen => { self.clear_if_dirty(&my_out, &librustc_stamp); }, Mode::ToolBootstrap => { }, Mode::ToolStd => { self.clear_if_dirty(&my_out, &libstd_stamp); }, Mode::ToolTest => { self.clear_if_dirty(&my_out, &libstd_stamp); self.clear_if_dirty(&my_out, &libtest_stamp); }, Mode::ToolRustc => { self.clear_if_dirty(&my_out, &libstd_stamp); 
self.clear_if_dirty(&my_out, &libtest_stamp); self.clear_if_dirty(&my_out, &librustc_stamp); }, } } cargo", "commid": "rust_pr_63470"}], "negative_passages": []}
{"query_id": "q-en-rust-3570768bb001b4b48f7d19c70c45cba0e8f211904f7edb5d3119ff7e7fee7cfa", "query": "This also occurs with etc. Changing requires only the reverse dependencies of to be recompiled. is also recompiled. As you can see, this takes twice as much as compiling the reverse dependencies of alone, leading to compilation being 3x slower.", "positive_passages": [{"docid": "doc-en-rust-9d639d84c2f0fe53329d4b551bedabea3c30bcb4f1a8908092c0c607b18583a7", "text": "}, } // This tells Cargo (and in turn, rustc) to output more complete // dependency information. Most importantly for rustbuild, this // includes sysroot artifacts, like libstd, which means that we don't // need to track those in rustbuild (an error prone process!). This // feature is currently unstable as there may be some bugs and such, but // it represents a big improvement in rustbuild's reliability on // rebuilds, so we're using it here. // // For some additional context, see #63470 (the PR originally adding // this), as well as #63012 which is the tracking issue for this // feature on the rustc side. cargo.arg(\"-Zbinary-dep-depinfo\"); cargo.arg(\"-j\").arg(self.jobs().to_string()); // Remove make-related flags to ensure Cargo can correctly set things up cargo.env_remove(\"MAKEFLAGS\");", "commid": "rust_pr_63470"}], "negative_passages": []}
{"query_id": "q-en-rust-3570768bb001b4b48f7d19c70c45cba0e8f211904f7edb5d3119ff7e7fee7cfa", "query": "This also occurs with etc. Changing requires only the reverse dependencies of to be recompiled. is also recompiled. As you can see, this takes twice as much as compiling the reverse dependencies of alone, leading to compilation being 3x slower.", "positive_passages": [{"docid": "doc-en-rust-f515d4963da911b191f7faae357370e1be6dc99e30fd6ccb2209db0947d434f6", "text": "let libdir = builder.sysroot_libdir(compiler, target); let hostdir = builder.sysroot_libdir(compiler, compiler.host); add_to_sysroot(&builder, &libdir, &hostdir, &rustdoc_stamp(builder, compiler, target)); builder.cargo(compiler, Mode::ToolRustc, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []}
{"query_id": "q-en-rust-3570768bb001b4b48f7d19c70c45cba0e8f211904f7edb5d3119ff7e7fee7cfa", "query": "This also occurs with etc. Changing requires only the reverse dependencies of to be recompiled. is also recompiled. As you can see, this takes twice as much as compiling the reverse dependencies of alone, leading to compilation being 3x slower.", "positive_passages": [{"docid": "doc-en-rust-432d9631a7ef91d129d2cfaff55f38dadb459167bf99bc7744fd5bc512bfc321", "text": "use std::process::{Command, Stdio, exit}; use std::str; use build_helper::{output, mtime, t, up_to_date}; use build_helper::{output, t, up_to_date}; use filetime::FileTime; use serde::Deserialize; use serde_json;", "commid": "rust_pr_63470"}], "negative_passages": []}
{"query_id": "q-en-rust-3570768bb001b4b48f7d19c70c45cba0e8f211904f7edb5d3119ff7e7fee7cfa", "query": "This also occurs with etc. Changing requires only the reverse dependencies of to be recompiled. is also recompiled. As you can see, this takes twice as much as compiling the reverse dependencies of alone, leading to compilation being 3x slower.", "positive_passages": [{"docid": "doc-en-rust-093ad82386bc6f2c9804225303004b7374d3bd3f54cc107e31b82a46c2e29d8e", "text": "// for reason why the sanitizers are not built in stage0. copy_apple_sanitizer_dylibs(builder, &builder.native_dir(target), \"osx\", &libdir); } builder.cargo(target_compiler, Mode::ToolStd, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []}
{"query_id": "q-en-rust-3570768bb001b4b48f7d19c70c45cba0e8f211904f7edb5d3119ff7e7fee7cfa", "query": "This also occurs with etc. Changing requires only the reverse dependencies of to be recompiled. is also recompiled. As you can see, this takes twice as much as compiling the reverse dependencies of alone, leading to compilation being 3x slower.", "positive_passages": [{"docid": "doc-en-rust-4f02897f85d8e9bfdaeae6766e030d8555e33e4613fcd3fb04fedf0d0990a802", "text": "&builder.sysroot_libdir(target_compiler, compiler.host), &libtest_stamp(builder, compiler, target) ); builder.cargo(target_compiler, Mode::ToolTest, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []}
{"query_id": "q-en-rust-3570768bb001b4b48f7d19c70c45cba0e8f211904f7edb5d3119ff7e7fee7cfa", "query": "This also occurs with etc. Changing requires only the reverse dependencies of to be recompiled. is also recompiled. As you can see, this takes twice as much as compiling the reverse dependencies of alone, leading to compilation being 3x slower.", "positive_passages": [{"docid": "doc-en-rust-d2343b61ee6096e2e13dd91893e26549fb3a997a5898079b8bbabeb6dce96bb6", "text": "&builder.sysroot_libdir(target_compiler, compiler.host), &librustc_stamp(builder, compiler, target) ); builder.cargo(target_compiler, Mode::ToolRustc, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []}
{"query_id": "q-en-rust-3570768bb001b4b48f7d19c70c45cba0e8f211904f7edb5d3119ff7e7fee7cfa", "query": "This also occurs with etc. Changing requires only the reverse dependencies of to be recompiled. is also recompiled. As you can see, this takes twice as much as compiling the reverse dependencies of alone, leading to compilation being 3x slower.", "positive_passages": [{"docid": "doc-en-rust-d645c75ba90a4081207db3103439b1bcb0c9033b5d6299a205f1ce0b23845741", "text": "deps.push((path_to_add.into(), false)); } // Now we want to update the contents of the stamp file, if necessary. First // we read off the previous contents along with its mtime. If our new // contents (the list of files to copy) is different or if any dep's mtime // is newer then we rewrite the stamp file. deps.sort(); let stamp_contents = fs::read(stamp); let stamp_mtime = mtime(&stamp); let mut new_contents = Vec::new(); let mut max = None; let mut max_path = None; for (dep, proc_macro) in deps.iter() { let mtime = mtime(dep); if Some(mtime) > max { max = Some(mtime); max_path = Some(dep.clone()); } new_contents.extend(if *proc_macro { b\"h\" } else { b\"t\" }); new_contents.extend(dep.to_str().unwrap().as_bytes()); new_contents.extend(b\"0\"); } let max = max.unwrap(); let max_path = max_path.unwrap(); let contents_equal = stamp_contents .map(|contents| contents == new_contents) .unwrap_or_default(); if contents_equal && max <= stamp_mtime { builder.verbose(&format!(\"not updating {:?}; contents equal and {:?} <= {:?}\", stamp, max, stamp_mtime)); return deps.into_iter().map(|(d, _)| d).collect() } if max > stamp_mtime { builder.verbose(&format!(\"updating {:?} as {:?} changed\", stamp, max_path)); } else { builder.verbose(&format!(\"updating {:?} as deps changed\", stamp)); } t!(fs::write(&stamp, &new_contents)); deps.into_iter().map(|(d, _)| d).collect() }", "commid": "rust_pr_63470"}], "negative_passages": []}
{"query_id": "q-en-rust-7f49b450e2a2223e97ed4cd19c7b077d69e37d7dae4f440356ca46bd12ff0c1e", "query": "Discovered in it turns out that our vendored copies of these libraires cause problems when they override the system versions by accident. We should rename our copies with a Rust-specific name to avoid this name clash.", "positive_passages": [{"docid": "doc-en-rust-08b015ccef9e1d86e643ae2795da137a9a9befa893f54c699e36bc5bb405de7b", "text": "fn copy_apple_sanitizer_dylibs(builder: &Builder, native_dir: &Path, platform: &str, into: &Path) { for &sanitizer in &[\"asan\", \"tsan\"] { let filename = format!(\"libclang_rt.{}_{}_dynamic.dylib\", sanitizer, platform); let filename = format!(\"lib__rustc__clang_rt.{}_{}_dynamic.dylib\", sanitizer, platform); let mut src_path = native_dir.join(sanitizer); src_path.push(\"build\"); src_path.push(\"lib\");", "commid": "rust_pr_54681"}], "negative_passages": []}
{"query_id": "q-en-rust-7f49b450e2a2223e97ed4cd19c7b077d69e37d7dae4f440356ca46bd12ff0c1e", "query": "Discovered in it turns out that our vendored copies of these libraires cause problems when they override the system versions by accident. We should rename our copies with a Rust-specific name to avoid this name clash.", "positive_passages": [{"docid": "doc-en-rust-fd60ee6a2d366165852597c5b576b1bf05706a5d2f6e9eace6b8d6e26d2bc370", "text": "pub out_dir: PathBuf, } impl NativeLibBoilerplate { /// On OSX we don't want to ship the exact filename that compiler-rt builds. /// This conflicts with the system and ours is likely a wildly different /// version, so they can't be substituted. /// /// As a result, we rename it here but we need to also use /// `install_name_tool` on OSX to rename the commands listed inside of it to /// ensure it's linked against correctly. pub fn fixup_sanitizer_lib_name(&self, sanitizer_name: &str) { if env::var(\"TARGET\").unwrap() != \"x86_64-apple-darwin\" { return } let dir = self.out_dir.join(\"build/lib/darwin\"); let name = format!(\"clang_rt.{}_osx_dynamic\", sanitizer_name); let src = dir.join(&format!(\"lib{}.dylib\", name)); let new_name = format!(\"lib__rustc__{}.dylib\", name); let dst = dir.join(&new_name); println!(\"{} => {}\", src.display(), dst.display()); fs::rename(&src, &dst).unwrap(); let status = Command::new(\"install_name_tool\") .arg(\"-id\") .arg(format!(\"@rpath/{}\", new_name)) .arg(&dst) .status() .expect(\"failed to execute `install_name_tool`\"); assert!(status.success()); } } impl Drop for NativeLibBoilerplate { fn drop(&mut self) { if !thread::panicking() {", "commid": "rust_pr_54681"}], "negative_passages": []}
{"query_id": "q-en-rust-7f49b450e2a2223e97ed4cd19c7b077d69e37d7dae4f440356ca46bd12ff0c1e", "query": "Discovered in it turns out that our vendored copies of these libraires cause problems when they override the system versions by accident. We should rename our copies with a Rust-specific name to avoid this name clash.", "positive_passages": [{"docid": "doc-en-rust-fc2d1cc79a3e36439f96421a71dbb3eee1e368d3c04feedf30b8914249f22e80", "text": "pub fn sanitizer_lib_boilerplate(sanitizer_name: &str) -> Result<(NativeLibBoilerplate, String), ()> { let (link_name, search_path, dynamic) = match &*env::var(\"TARGET\").unwrap() { let (link_name, search_path, apple) = match &*env::var(\"TARGET\").unwrap() { \"x86_64-unknown-linux-gnu\" => ( format!(\"clang_rt.{}-x86_64\", sanitizer_name), \"build/lib/linux\",", "commid": "rust_pr_54681"}], "negative_passages": []}
{"query_id": "q-en-rust-7f49b450e2a2223e97ed4cd19c7b077d69e37d7dae4f440356ca46bd12ff0c1e", "query": "Discovered in it turns out that our vendored copies of these libraires cause problems when they override the system versions by accident. We should rename our copies with a Rust-specific name to avoid this name clash.", "positive_passages": [{"docid": "doc-en-rust-789f8f5a09448c1b0beef3310b69f75a5bdb297292e3300be506c2a6bea9a543", "text": "), _ => return Err(()), }; let to_link = if dynamic { format!(\"dylib={}\", link_name) let to_link = if apple { format!(\"dylib=__rustc__{}\", link_name) } else { format!(\"static={}\", link_name) };", "commid": "rust_pr_54681"}], "negative_passages": []}
{"query_id": "q-en-rust-7f49b450e2a2223e97ed4cd19c7b077d69e37d7dae4f440356ca46bd12ff0c1e", "query": "Discovered in it turns out that our vendored copies of these libraires cause problems when they override the system versions by accident. We should rename our copies with a Rust-specific name to avoid this name clash.", "positive_passages": [{"docid": "doc-en-rust-e6320ba767747de6125367819649b8366dcef86ae9a1dca2fa225e3d7bb8861d", "text": ".out_dir(&native.out_dir) .build_target(&target) .build(); native.fixup_sanitizer_lib_name(\"asan\"); } println!(\"cargo:rerun-if-env-changed=LLVM_CONFIG\"); }", "commid": "rust_pr_54681"}], "negative_passages": []}
{"query_id": "q-en-rust-7f49b450e2a2223e97ed4cd19c7b077d69e37d7dae4f440356ca46bd12ff0c1e", "query": "Discovered in it turns out that our vendored copies of these libraires cause problems when they override the system versions by accident. We should rename our copies with a Rust-specific name to avoid this name clash.", "positive_passages": [{"docid": "doc-en-rust-3025cf2f433f59e58f399270a38a30acc5948098e7298f566809e886ae45ebae", "text": ".out_dir(&native.out_dir) .build_target(&target) .build(); native.fixup_sanitizer_lib_name(\"tsan\"); } println!(\"cargo:rerun-if-env-changed=LLVM_CONFIG\"); }", "commid": "rust_pr_54681"}], "negative_passages": []}
{"query_id": "q-en-rust-5cc3c7a26b374dc4156df6841e15f47e5da48b4efbd566a708f74578160ee9c9", "query": "std::io::Seek I got confused by this line and over on so after a little discussion it seems like this might be better wording for those of us who are silly A seek beyond the end of a stream is allowed, but it is implementation-defined. The hyphen makes it clear \u201cimplementation defined\u201d is a known concept (and matches it\u2019s usage elsewhere) and the \u201cit is\u201d makes it a bit more clear that the full sentence is considered and intentional. While normally I find brevity improves clarity, in this case if you don\u2019t realize \u201cimplementation defined\u201d is a single concept, it sounds like a thought that trailed off after an edit\nThanks I know other projects I've worked with have Easy and Documentation as tags so I wasn't sure if you guys might do the same\nYep, they're just actual labels over on the right ;-) (I also tabbed away and forgot to put the easy label on immediately )\nDon't worry about that! It's very easy for developers to misjudge whether such descriptions will make sense to the target users. Pointing these things out is a great way to contribute to documentation.", "positive_passages": [{"docid": "doc-en-rust-e40906f5dccabed66d0773dbc8fa0338e6992883a4df3eee4a735fa15cc6b2cc", "text": "pub trait Seek { /// Seek to an offset, in bytes, in a stream. /// /// A seek beyond the end of a stream is allowed, but implementation /// defined. /// A seek beyond the end of a stream is allowed, but behavior is defined /// by the implementation. /// /// If the seek operation completed successfully, /// this method returns the new position from the start of the stream.", "commid": "rust_pr_54635"}], "negative_passages": []}
{"query_id": "q-en-rust-80e8e973ae0b6e82394c8853b4f5876075770e920ec6dd8902244a2de8eea5d4", "query": "Compiling code like: results in: presumably because attempts to translate it into an x86-specific ABI, which the AArch64 backend rightly rejects. already ignores for non-x86 targets: I'm not completely sure what the right answer for Rust is. I think is the rough equivalent of the above clang bit and needs to take a few more cases into account for non-x86 Windows: does that sound right? If so, I'll code up a patch.\nSounds like a good place to me!", "positive_passages": [{"docid": "doc-en-rust-55f50da60459e31616d2f7f05340470c69c41cbb12b9f7b3402f2754a3d84552", "text": "} impl Target { /// Given a function ABI, turn \"System\" into the correct ABI for this target. /// Given a function ABI, turn it into the correct ABI for this target. pub fn adjust_abi(&self, abi: Abi) -> Abi { match abi { Abi::System => {", "commid": "rust_pr_54576"}], "negative_passages": []}
{"query_id": "q-en-rust-80e8e973ae0b6e82394c8853b4f5876075770e920ec6dd8902244a2de8eea5d4", "query": "Compiling code like: results in: presumably because attempts to translate it into an x86-specific ABI, which the AArch64 backend rightly rejects. already ignores for non-x86 targets: I'm not completely sure what the right answer for Rust is. I think is the rough equivalent of the above clang bit and needs to take a few more cases into account for non-x86 Windows: does that sound right? If so, I'll code up a patch.\nSounds like a good place to me!", "positive_passages": [{"docid": "doc-en-rust-b222d6c566648b1e54c6e450a4f4afcb5bae2271461646efde5a61659936a369", "text": "Abi::C } }, // These ABI kinds are ignored on non-x86 Windows targets. // See https://docs.microsoft.com/en-us/cpp/cpp/argument-passing-and-naming-conventions // and the individual pages for __stdcall et al. Abi::Stdcall | Abi::Fastcall | Abi::Vectorcall | Abi::Thiscall => { if self.options.is_like_windows && self.arch != \"x86\" { Abi::C } else { abi } }, abi => abi } }", "commid": "rust_pr_54576"}], "negative_passages": []}
{"query_id": "q-en-rust-80e8e973ae0b6e82394c8853b4f5876075770e920ec6dd8902244a2de8eea5d4", "query": "Compiling code like: results in: presumably because attempts to translate it into an x86-specific ABI, which the AArch64 backend rightly rejects. already ignores for non-x86 targets: I'm not completely sure what the right answer for Rust is. I think is the rough equivalent of the above clang bit and needs to take a few more cases into account for non-x86 Windows: does that sound right? If so, I'll code up a patch.\nSounds like a good place to me!", "positive_passages": [{"docid": "doc-en-rust-197959355bcf33c8b60ebe2f3099b1938a53d26e02d0c691012f7da90ed350a6", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms. // ignore-arm // ignore-aarch64 // Test that `extern \"stdcall\"` is properly translated. // only-x86 // compile-flags: -C no-prepopulate-passes", "commid": "rust_pr_54576"}], "negative_passages": []}
{"query_id": "q-en-rust-47547d64a0fcb212ce4bff5118eea027d90cf4f1bc69020078db3e4cb242cc3b", "query": "Because LLVM doesn't know what is, we currently , relying on the function. So if the current CPU's family is Haswell, we convert to . As noticed and , some Intel Pentiums belong to the Haswell microarch, but they lack AVX.
let features = attributes::llvm_target_features(sess).collect::
, ); pub fn LLVMGetHostCPUFeatures() -> *mut c_char; pub fn LLVMDisposeMessage(message: *mut c_char); // Stuff that's in llvm-wrapper/ because it's not upstream yet. /// Opens an object file.", "commid": "rust_pr_80749"}], "negative_passages": []}
{"query_id": "q-en-rust-47547d64a0fcb212ce4bff5118eea027d90cf4f1bc69020078db3e4cb242cc3b", "query": "Because LLVM doesn't know what is, we currently , relying on the function. So if the current CPU's family is Haswell, we convert to . As noticed and , some Intel Pentiums belong to the Haswell microarch, but they lack AVX.
use std::ffi::CString; use std::ffi::{CStr, CString}; use std::slice; use std::str;", "commid": "rust_pr_80749"}], "negative_passages": []}
{"query_id": "q-en-rust-47547d64a0fcb212ce4bff5118eea027d90cf4f1bc69020078db3e4cb242cc3b", "query": "Because LLVM doesn't know what is, we currently , relying on the function. So if the current CPU's family is Haswell, we convert to . As noticed and , some Intel Pentiums belong to the Haswell microarch, but they lack AVX.
pub fn handle_native_features(sess: &Session) -> Vec (ty::RegionKind::ReLateBound(_, _), _) => { (ty::RegionKind::ReLateBound(_, _), _) | (_, ty::RegionKind::ReVar(_)) => { // One of these is true: // The new predicate has a HRTB in a spot where the old // predicate does not (if they both had a HRTB, the previous // match arm would have executed). // match arm would have executed). A HRBT is a 'stricter' // bound than anything else, so we want to keep the newer // predicate (with the HRBT) in place of the old predicate. // // The means we want to remove the older predicate from // user_computed_preds, since having both it and the new // OR // // The old predicate has a region variable where the new // predicate has some other kind of region. An region // variable isn't something we can actually display to a user, // so we choose ther new predicate (which doesn't have a region // varaible). // // In both cases, we want to remove the old predicate, // from user_computed_preds, and replace it with the new // one. Having both the old and the new // predicate in a ParamEnv would confuse SelectionContext // // We're currently in the predicate passed to 'retain', // so we return 'false' to remove the old predicate from // user_computed_preds return false; } (_, ty::RegionKind::ReLateBound(_, _)) => { // This is the opposite situation as the previous arm - the // old predicate has a HRTB lifetime in a place where the // new predicate does not. We want to leave the old (_, ty::RegionKind::ReLateBound(_, _)) | (ty::RegionKind::ReVar(_), _) => { // This is the opposite situation as the previous arm. // One of these is true: // // The old predicate has a HRTB lifetime in a place where the // new predicate does not. // // OR // // The new predicate has a region variable where the old // predicate has some other type of region. // // We want to leave the old // predicate in user_computed_preds, and skip adding // new_pred to user_computed_params. 
should_add_new = false } }, _ => {} } }", "commid": "rust_pr_55453"}], "negative_passages": []}
{"query_id": "q-en-rust-d16736a830399247ea4d437e19f9b3ecbcf87e76343d3c5ddcf152ee71be61e0", "query": "Rust version: How to reproduce: Error:\nMinimized reproduction:\nI'd like to work on this.", "positive_passages": [{"docid": "doc-en-rust-36f21e0600476ef1ca4bc71bcf10a12c311718524b32d7973b4c0e3ebc53690e", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 // command specific path, we call clear_if_dirty with this let mut my_out = match cmd { \"build\" => self.cargo_out(compiler, mode, target), // This is the intended out directory for crate documentation. \"doc\" | \"rustdoc\" => self.crate_doc_out(target), _ => self.stage_out(compiler, mode), }; // This is for the original compiler, but if we're forced to use stage 1, then // std/test/rustc stamps won't exist in stage 2, so we need to get those from stage 1, since // we copy the libs forward. let cmp = self.compiler_for(compiler.stage, compiler.host, target); let libstd_stamp = match cmd { \"check\" | \"clippy\" | \"fix\" => check::libstd_stamp(self, cmp, target), _ => compile::libstd_stamp(self, cmp, target), }; let libtest_stamp = match cmd { \"check\" | \"clippy\" | \"fix\" => check::libtest_stamp(self, cmp, target), _ => compile::libtest_stamp(self, cmp, target), }; let librustc_stamp = match cmd { \"check\" | \"clippy\" | \"fix\" => check::librustc_stamp(self, cmp, target), _ => compile::librustc_stamp(self, cmp, target), }; // Codegen backends are not yet tracked by -Zbinary-dep-depinfo, // so we need to explicitly clear out if they've been updated. 
for backend in self.codegen_backends(compiler) { self.clear_if_dirty(&out_dir, &backend); } if cmd == \"doc\" || cmd == \"rustdoc\" { if mode == Mode::Rustc || mode == Mode::ToolRustc || mode == Mode::Codegen { let my_out = match mode { // This is the intended out directory for compiler documentation. my_out = self.compiler_doc_out(target); } Mode::Rustc | Mode::ToolRustc | Mode::Codegen => self.compiler_doc_out(target), _ => self.crate_doc_out(target), }; let rustdoc = self.rustdoc(compiler); self.clear_if_dirty(&my_out, &rustdoc); } else if cmd != \"test\" { match mode { Mode::Std => { self.clear_if_dirty(&my_out, &self.rustc(compiler)); for backend in self.codegen_backends(compiler) { self.clear_if_dirty(&my_out, &backend); } }, Mode::Test => { self.clear_if_dirty(&my_out, &libstd_stamp); }, Mode::Rustc => { self.clear_if_dirty(&my_out, &self.rustc(compiler)); self.clear_if_dirty(&my_out, &libstd_stamp); self.clear_if_dirty(&my_out, &libtest_stamp); }, Mode::Codegen => { self.clear_if_dirty(&my_out, &librustc_stamp); }, Mode::ToolBootstrap => { }, Mode::ToolStd => { self.clear_if_dirty(&my_out, &libstd_stamp); }, Mode::ToolTest => { self.clear_if_dirty(&my_out, &libstd_stamp); self.clear_if_dirty(&my_out, &libtest_stamp); }, Mode::ToolRustc => { self.clear_if_dirty(&my_out, &libstd_stamp); self.clear_if_dirty(&my_out, &libtest_stamp); self.clear_if_dirty(&my_out, &librustc_stamp); }, } } cargo", "commid": "rust_pr_63470"}], "negative_passages": []}
{"query_id": "q-en-rust-bfa412db247ed5d4b7b0881a683a52b44b134b94e31c20369a7f564e4858e245", "query": "That is, appears to lose incremental artifacts. cc\nYes, I believe this is because rustc/cargo will otherwise error on \"found a new std\" or something like that? I don't quite see how incremental is different from the normal compilation here?\nincremental would still work even if dependencies changed. I think the fundamental problem here is that deletes everything, whereas (which, IIRC, does not remove incremental artifacts by default) would be appropriate.\nHm, I thought was equivalent to essentially -- could you clarify that? Would it be better to use cargo clean in rustbuild?\nRegardless of what does I don't think we should execute it here. Rustbuild is overly conservative in what it deletes, and it can get away probably with just deleting the folder instead of the entire folder. That would leave around the incremental cache. I believe, however, that is like\nMaybe the benchmarks I remember having for incremental had the incremental dir elsewhere than . I sure was confused by it, since I'd expect to remove all artifacts. If it does do that, then I agree should just remove a subset of the contents of target dirs.\nLooking at the code, these calls appear to be relevant: The problem is that the directory refers to is non-trivially structured. Instead of removing , we need to remove all the files (or at least the ones that are artifacts) and the directory in .\nI keep hitting this, specifically it keeps rebuilding and , every time I change rustc, which cause them to take much longer than any of the rustc crates, and this issue has become my main bottleneck (now that I'm building on a machine).\nNominating for discussion (specifically, allocating resources towards this) at the next relevant team meeting (tagged both and as I'm not sure which has \"jurisdiction\" here).\nIt'd be helpful to know how much rustbuild can \"trust\" the compiler here. 
Can we keep incremental artifact directories around from previous versions of the compiler?\nThe compiler can only work with incremental artifacts that it created itself. For any other version of the compiler it has to assume that it's incompatible. It usually just ignores such incompatible artifacts, but in the bootstrapping scenario, it might not always be smart enough to do that.\nI think the question here is will rustc's incremental artifacts correctly deal with crates changing (including adding and removing crates) in the sysroot. If so, we could keep the folder around. Cargo assumes the sysroot is unchanging, which is why we remove the directory in the first place.\nIncremental doesn't care about vs , so I'd expect it to be pretty resilient here.\ntriage: P-medium. (this inefficiency in incremental comp. would be nice to resolve, but is not P-high IMO.)", "positive_passages": [{"docid": "doc-en-rust-9d639d84c2f0fe53329d4b551bedabea3c30bcb4f1a8908092c0c607b18583a7", "text": "}, } // This tells Cargo (and in turn, rustc) to output more complete // dependency information. Most importantly for rustbuild, this // includes sysroot artifacts, like libstd, which means that we don't // need to track those in rustbuild (an error prone process!). This // feature is currently unstable as there may be some bugs and such, but // it represents a big improvement in rustbuild's reliability on // rebuilds, so we're using it here. // // For some additional context, see #63470 (the PR originally adding // this), as well as #63012 which is the tracking issue for this // feature on the rustc side. cargo.arg(\"-Zbinary-dep-depinfo\"); cargo.arg(\"-j\").arg(self.jobs().to_string()); // Remove make-related flags to ensure Cargo can correctly set things up cargo.env_remove(\"MAKEFLAGS\");", "commid": "rust_pr_63470"}], "negative_passages": []}
{"query_id": "q-en-rust-bfa412db247ed5d4b7b0881a683a52b44b134b94e31c20369a7f564e4858e245", "query": "That is, appears to lose incremental artifacts. cc\nYes, I believe this is because rustc/cargo will otherwise error on \"found a new std\" or something like that? I don't quite see how incremental is different from the normal compilation here?\nincremental would still work even if dependencies changed. I think the fundamental problem here is that deletes everything, whereas (which, IIRC, does not remove incremental artifacts by default) would be appropriate.\nHm, I thought was equivalent to essentially -- could you clarify that? Would it be better to use cargo clean in rustbuild?\nRegardless of what does I don't think we should execute it here. Rustbuild is overly conservative in what it deletes, and it can get away probably with just deleting the folder instead of the entire folder. That would leave around the incremental cache. I believe, however, that is like\nMaybe the benchmarks I remember having for incremental had the incremental dir elsewhere than . I sure was confused by it, since I'd expect to remove all artifacts. If it does do that, then I agree should just remove a subset of the contents of target dirs.\nLooking at the code, these calls appear to be relevant: The problem is that the directory refers to is non-trivially structured. Instead of removing , we need to remove all the files (or at least the ones that are artifacts) and the directory in .\nI keep hitting this, specifically it keeps rebuilding and , every time I change rustc, which cause them to take much longer than any of the rustc crates, and this issue has become my main bottleneck (now that I'm building on a machine).\nNominating for discussion (specifically, allocating resources towards this) at the next relevant team meeting (tagged both and as I'm not sure which has \"jurisdiction\" here).\nIt'd be helpful to know how much rustbuild can \"trust\" the compiler here. 
Can we keep incremental artifact directories around from previous versions of the compiler?\nThe compiler can only work with incremental artifacts that it created itself. For any other version of the compiler it has to assume that it's incompatible. It usually just ignores such incompatible artifacts, but in the bootstrapping scenario, it might not always be smart enough to do that.\nI think the question here is will rustc's incremental artifacts correctly deal with crates changing (including adding and removing crates) in the sysroot. If so, we could keep the folder around. Cargo assumes the sysroot is unchanging, which is why we remove the directory in the first place.\nIncremental doesn't care about vs , so I'd expect it to be pretty resilient here.\ntriage: P-medium. (this inefficiency in incremental comp. would be nice to resolve, but is not P-high IMO.)", "positive_passages": [{"docid": "doc-en-rust-f515d4963da911b191f7faae357370e1be6dc99e30fd6ccb2209db0947d434f6", "text": "let libdir = builder.sysroot_libdir(compiler, target); let hostdir = builder.sysroot_libdir(compiler, compiler.host); add_to_sysroot(&builder, &libdir, &hostdir, &rustdoc_stamp(builder, compiler, target)); builder.cargo(compiler, Mode::ToolRustc, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []}
{"query_id": "q-en-rust-bfa412db247ed5d4b7b0881a683a52b44b134b94e31c20369a7f564e4858e245", "query": "That is, appears to lose incremental artifacts. cc\nYes, I believe this is because rustc/cargo will otherwise error on \"found a new std\" or something like that? I don't quite see how incremental is different from the normal compilation here?\nincremental would still work even if dependencies changed. I think the fundamental problem here is that deletes everything, whereas (which, IIRC, does not remove incremental artifacts by default) would be appropriate.\nHm, I thought was equivalent to essentially -- could you clarify that? Would it be better to use cargo clean in rustbuild?\nRegardless of what does I don't think we should execute it here. Rustbuild is overly conservative in what it deletes, and it can get away probably with just deleting the folder instead of the entire folder. That would leave around the incremental cache. I believe, however, that is like\nMaybe the benchmarks I remember having for incremental had the incremental dir elsewhere than . I sure was confused by it, since I'd expect to remove all artifacts. If it does do that, then I agree should just remove a subset of the contents of target dirs.\nLooking at the code, these calls appear to be relevant: The problem is that the directory refers to is non-trivially structured. Instead of removing , we need to remove all the files (or at least the ones that are artifacts) and the directory in .\nI keep hitting this, specifically it keeps rebuilding and , every time I change rustc, which cause them to take much longer than any of the rustc crates, and this issue has become my main bottleneck (now that I'm building on a machine).\nNominating for discussion (specifically, allocating resources towards this) at the next relevant team meeting (tagged both and as I'm not sure which has \"jurisdiction\" here).\nIt'd be helpful to know how much rustbuild can \"trust\" the compiler here. 
Can we keep incremental artifact directories around from previous versions of the compiler?\nThe compiler can only work with incremental artifacts that it created itself. For any other version of the compiler it has to assume that it's incompatible. It usually just ignores such incompatible artifacts, but in the bootstrapping scenario, it might not always be smart enough to do that.\nI think the question here is will rustc's incremental artifacts correctly deal with crates changing (including adding and removing crates) in the sysroot. If so, we could keep the folder around. Cargo assumes the sysroot is unchanging, which is why we remove the directory in the first place.\nIncremental doesn't care about vs , so I'd expect it to be pretty resilient here.\ntriage: P-medium. (this inefficiency in incremental comp. would be nice to resolve, but is not P-high IMO.)", "positive_passages": [{"docid": "doc-en-rust-432d9631a7ef91d129d2cfaff55f38dadb459167bf99bc7744fd5bc512bfc321", "text": "use std::process::{Command, Stdio, exit}; use std::str; use build_helper::{output, mtime, t, up_to_date}; use build_helper::{output, t, up_to_date}; use filetime::FileTime; use serde::Deserialize; use serde_json;", "commid": "rust_pr_63470"}], "negative_passages": []}
{"query_id": "q-en-rust-bfa412db247ed5d4b7b0881a683a52b44b134b94e31c20369a7f564e4858e245", "query": "That is, appears to lose incremental artifacts. cc\nYes, I believe this is because rustc/cargo will otherwise error on \"found a new std\" or something like that? I don't quite see how incremental is different from the normal compilation here?\nincremental would still work even if dependencies changed. I think the fundamental problem here is that deletes everything, whereas (which, IIRC, does not remove incremental artifacts by default) would be appropriate.\nHm, I thought was equivalent to essentially -- could you clarify that? Would it be better to use cargo clean in rustbuild?\nRegardless of what does I don't think we should execute it here. Rustbuild is overly conservative in what it deletes, and it can get away probably with just deleting the folder instead of the entire folder. That would leave around the incremental cache. I believe, however, that is like\nMaybe the benchmarks I remember having for incremental had the incremental dir elsewhere than . I sure was confused by it, since I'd expect to remove all artifacts. If it does do that, then I agree should just remove a subset of the contents of target dirs.\nLooking at the code, these calls appear to be relevant: The problem is that the directory refers to is non-trivially structured. Instead of removing , we need to remove all the files (or at least the ones that are artifacts) and the directory in .\nI keep hitting this, specifically it keeps rebuilding and , every time I change rustc, which cause them to take much longer than any of the rustc crates, and this issue has become my main bottleneck (now that I'm building on a machine).\nNominating for discussion (specifically, allocating resources towards this) at the next relevant team meeting (tagged both and as I'm not sure which has \"jurisdiction\" here).\nIt'd be helpful to know how much rustbuild can \"trust\" the compiler here. 
Can we keep incremental artifact directories around from previous versions of the compiler?\nThe compiler can only work with incremental artifacts that it created itself. For any other version of the compiler it has to assume that it's incompatible. It usually just ignores such incompatible artifacts, but in the bootstrapping scenario, it might not always be smart enough to do that.\nI think the question here is will rustc's incremental artifacts correctly deal with crates changing (including adding and removing crates) in the sysroot. If so, we could keep the folder around. Cargo assumes the sysroot is unchanging, which is why we remove the directory in the first place.\nIncremental doesn't care about vs , so I'd expect it to be pretty resilient here.\ntriage: P-medium. (this inefficiency in incremental comp. would be nice to resolve, but is not P-high IMO.)", "positive_passages": [{"docid": "doc-en-rust-093ad82386bc6f2c9804225303004b7374d3bd3f54cc107e31b82a46c2e29d8e", "text": "// for reason why the sanitizers are not built in stage0. copy_apple_sanitizer_dylibs(builder, &builder.native_dir(target), \"osx\", &libdir); } builder.cargo(target_compiler, Mode::ToolStd, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []}
{"query_id": "q-en-rust-bfa412db247ed5d4b7b0881a683a52b44b134b94e31c20369a7f564e4858e245", "query": "That is, appears to lose incremental artifacts. cc\nYes, I believe this is because rustc/cargo will otherwise error on \"found a new std\" or something like that? I don't quite see how incremental is different from the normal compilation here?\nincremental would still work even if dependencies changed. I think the fundamental problem here is that deletes everything, whereas (which, IIRC, does not remove incremental artifacts by default) would be appropriate.\nHm, I thought was equivalent to essentially -- could you clarify that? Would it be better to use cargo clean in rustbuild?\nRegardless of what does I don't think we should execute it here. Rustbuild is overly conservative in what it deletes, and it can get away probably with just deleting the folder instead of the entire folder. That would leave around the incremental cache. I believe, however, that is like\nMaybe the benchmarks I remember having for incremental had the incremental dir elsewhere than . I sure was confused by it, since I'd expect to remove all artifacts. If it does do that, then I agree should just remove a subset of the contents of target dirs.\nLooking at the code, these calls appear to be relevant: The problem is that the directory refers to is non-trivially structured. Instead of removing , we need to remove all the files (or at least the ones that are artifacts) and the directory in .\nI keep hitting this, specifically it keeps rebuilding and , every time I change rustc, which cause them to take much longer than any of the rustc crates, and this issue has become my main bottleneck (now that I'm building on a machine).\nNominating for discussion (specifically, allocating resources towards this) at the next relevant team meeting (tagged both and as I'm not sure which has \"jurisdiction\" here).\nIt'd be helpful to know how much rustbuild can \"trust\" the compiler here. 
Can we keep incremental artifact directories around from previous versions of the compiler?\nThe compiler can only work with incremental artifacts that it created itself. For any other version of the compiler it has to assume that it's incompatible. It usually just ignores such incompatible artifacts, but in the bootstrapping scenario, it might not always be smart enough to do that.\nI think the question here is will rustc's incremental artifacts correctly deal with crates changing (including adding and removing crates) in the sysroot. If so, we could keep the folder around. Cargo assumes the sysroot is unchanging, which is why we remove the directory in the first place.\nIncremental doesn't care about vs , so I'd expect it to be pretty resilient here.\ntriage: P-medium. (this inefficiency in incremental comp. would be nice to resolve, but is not P-high IMO.)", "positive_passages": [{"docid": "doc-en-rust-4f02897f85d8e9bfdaeae6766e030d8555e33e4613fcd3fb04fedf0d0990a802", "text": "&builder.sysroot_libdir(target_compiler, compiler.host), &libtest_stamp(builder, compiler, target) ); builder.cargo(target_compiler, Mode::ToolTest, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []}
{"query_id": "q-en-rust-bfa412db247ed5d4b7b0881a683a52b44b134b94e31c20369a7f564e4858e245", "query": "That is, appears to lose incremental artifacts. cc\nYes, I believe this is because rustc/cargo will otherwise error on \"found a new std\" or something like that? I don't quite see how incremental is different from the normal compilation here?\nincremental would still work even if dependencies changed. I think the fundamental problem here is that deletes everything, whereas (which, IIRC, does not remove incremental artifacts by default) would be appropriate.\nHm, I thought was equivalent to essentially -- could you clarify that? Would it be better to use cargo clean in rustbuild?\nRegardless of what does I don't think we should execute it here. Rustbuild is overly conservative in what it deletes, and it can get away probably with just deleting the folder instead of the entire folder. That would leave around the incremental cache. I believe, however, that is like\nMaybe the benchmarks I remember having for incremental had the incremental dir elsewhere than . I sure was confused by it, since I'd expect to remove all artifacts. If it does do that, then I agree should just remove a subset of the contents of target dirs.\nLooking at the code, these calls appear to be relevant: The problem is that the directory refers to is non-trivially structured. Instead of removing , we need to remove all the files (or at least the ones that are artifacts) and the directory in .\nI keep hitting this, specifically it keeps rebuilding and , every time I change rustc, which cause them to take much longer than any of the rustc crates, and this issue has become my main bottleneck (now that I'm building on a machine).\nNominating for discussion (specifically, allocating resources towards this) at the next relevant team meeting (tagged both and as I'm not sure which has \"jurisdiction\" here).\nIt'd be helpful to know how much rustbuild can \"trust\" the compiler here. 
Can we keep incremental artifact directories around from previous versions of the compiler?\nThe compiler can only work with incremental artifacts that it created itself. For any other version of the compiler it has to assume that it's incompatible. It usually just ignores such incompatible artifacts, but in the bootstrapping scenario, it might not always be smart enough to do that.\nI think the question here is will rustc's incremental artifacts correctly deal with crates changing (including adding and removing crates) in the sysroot. If so, we could keep the folder around. Cargo assumes the sysroot is unchanging, which is why we remove the directory in the first place.\nIncremental doesn't care about vs , so I'd expect it to be pretty resilient here.\ntriage: P-medium. (this inefficiency in incremental comp. would be nice to resolve, but is not P-high IMO.)", "positive_passages": [{"docid": "doc-en-rust-d2343b61ee6096e2e13dd91893e26549fb3a997a5898079b8bbabeb6dce96bb6", "text": "&builder.sysroot_libdir(target_compiler, compiler.host), &librustc_stamp(builder, compiler, target) ); builder.cargo(target_compiler, Mode::ToolRustc, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []}
{"query_id": "q-en-rust-bfa412db247ed5d4b7b0881a683a52b44b134b94e31c20369a7f564e4858e245", "query": "That is, appears to lose incremental artifacts. cc\nYes, I believe this is because rustc/cargo will otherwise error on \"found a new std\" or something like that? I don't quite see how incremental is different from the normal compilation here?\nincremental would still work even if dependencies changed. I think the fundamental problem here is that deletes everything, whereas (which, IIRC, does not remove incremental artifacts by default) would be appropriate.\nHm, I thought was equivalent to essentially -- could you clarify that? Would it be better to use cargo clean in rustbuild?\nRegardless of what does I don't think we should execute it here. Rustbuild is overly conservative in what it deletes, and it can get away probably with just deleting the folder instead of the entire folder. That would leave around the incremental cache. I believe, however, that is like\nMaybe the benchmarks I remember having for incremental had the incremental dir elsewhere than . I sure was confused by it, since I'd expect to remove all artifacts. If it does do that, then I agree should just remove a subset of the contents of target dirs.\nLooking at the code, these calls appear to be relevant: The problem is that the directory refers to is non-trivially structured. Instead of removing , we need to remove all the files (or at least the ones that are artifacts) and the directory in .\nI keep hitting this, specifically it keeps rebuilding and , every time I change rustc, which cause them to take much longer than any of the rustc crates, and this issue has become my main bottleneck (now that I'm building on a machine).\nNominating for discussion (specifically, allocating resources towards this) at the next relevant team meeting (tagged both and as I'm not sure which has \"jurisdiction\" here).\nIt'd be helpful to know how much rustbuild can \"trust\" the compiler here. 
Can we keep incremental artifact directories around from previous versions of the compiler?\nThe compiler can only work with incremental artifacts that it created itself. For any other version of the compiler it has to assume that it's incompatible. It usually just ignores such incompatible artifacts, but in the bootstrapping scenario, it might not always be smart enough to do that.\nI think the question here is will rustc's incremental artifacts correctly deal with crates changing (including adding and removing crates) in the sysroot. If so, we could keep the folder around. Cargo assumes the sysroot is unchanging, which is why we remove the directory in the first place.\nIncremental doesn't care about vs , so I'd expect it to be pretty resilient here.\ntriage: P-medium. (this inefficiency in incremental comp. would be nice to resolve, but is not P-high IMO.)", "positive_passages": [{"docid": "doc-en-rust-d645c75ba90a4081207db3103439b1bcb0c9033b5d6299a205f1ce0b23845741", "text": "deps.push((path_to_add.into(), false)); } // Now we want to update the contents of the stamp file, if necessary. First // we read off the previous contents along with its mtime. If our new // contents (the list of files to copy) is different or if any dep's mtime // is newer then we rewrite the stamp file. 
deps.sort(); let stamp_contents = fs::read(stamp); let stamp_mtime = mtime(&stamp); let mut new_contents = Vec::new(); let mut max = None; let mut max_path = None; for (dep, proc_macro) in deps.iter() { let mtime = mtime(dep); if Some(mtime) > max { max = Some(mtime); max_path = Some(dep.clone()); } new_contents.extend(if *proc_macro { b\"h\" } else { b\"t\" }); new_contents.extend(dep.to_str().unwrap().as_bytes()); new_contents.extend(b\"0\"); } let max = max.unwrap(); let max_path = max_path.unwrap(); let contents_equal = stamp_contents .map(|contents| contents == new_contents) .unwrap_or_default(); if contents_equal && max <= stamp_mtime { builder.verbose(&format!(\"not updating {:?}; contents equal and {:?} <= {:?}\", stamp, max, stamp_mtime)); return deps.into_iter().map(|(d, _)| d).collect() } if max > stamp_mtime { builder.verbose(&format!(\"updating {:?} as {:?} changed\", stamp, max_path)); } else { builder.verbose(&format!(\"updating {:?} as deps changed\", stamp)); } t!(fs::write(&stamp, &new_contents)); deps.into_iter().map(|(d, _)| d).collect() }", "commid": "rust_pr_63470"}], "negative_passages": []}
{"query_id": "q-en-rust-acf00203f583eb2fb99b4f1d4b8f45d078b66e984d8f1c104d0d0e25287d9bcf", "query": "On rustc 1.31.0-nightly ( 2018-10-07) and x8664-unknown-linux-gnu, we see the following surprising behavior: In release mode it seems the write has not happened in the place that it should. Mentioning threadlocal tracking issue: .\nAh, apparently this is missing a . With the behavior is correct. We need to make sure mutating a thread_local static that does not have does not compile.\ncc Did fix this by any chance?\nIt looks like this is fixed on nightly; I'm guessing is the reason. We probably should add some regression tests for the cases listed here.", "positive_passages": [{"docid": "doc-en-rust-b779fa5dbd909bb88dbfffd6c9bc396b7a06bfb84a805ac3fb42744794df8b82", "text": " error[E0594]: cannot assign to immutable static item `S` --> $DIR/thread-local-mutation.rs:11:5 | LL | S = \"after\"; //~ ERROR cannot assign to immutable | ^^^^^^^^^^^ cannot assign error: aborting due to previous error For more information about this error, try `rustc --explain E0594`. ", "commid": "rust_pr_57107"}], "negative_passages": []}
{"query_id": "q-en-rust-acf00203f583eb2fb99b4f1d4b8f45d078b66e984d8f1c104d0d0e25287d9bcf", "query": "On rustc 1.31.0-nightly ( 2018-10-07) and x8664-unknown-linux-gnu, we see the following surprising behavior: In release mode it seems the write has not happened in the place that it should. Mentioning threadlocal tracking issue: .\nAh, apparently this is missing a . With the behavior is correct. We need to make sure mutating a thread_local static that does not have does not compile.\ncc Did fix this by any chance?\nIt looks like this is fixed on nightly; I'm guessing is the reason. We probably should add some regression tests for the cases listed here.", "positive_passages": [{"docid": "doc-en-rust-d7d3d35c5f8df570724aee609f46e2c1feebd6b2c3672a7002747a1c00594f25", "text": " // Regression test for #54901: immutable thread locals could be mutated. See: // https://github.com/rust-lang/rust/issues/29594#issuecomment-328177697 // https://github.com/rust-lang/rust/issues/54901 #![feature(thread_local)] #[thread_local] static S: &str = \"before\"; fn set_s() { S = \"after\"; //~ ERROR cannot assign to immutable } fn main() { println!(\"{}\", S); set_s(); println!(\"{}\", S); } ", "commid": "rust_pr_57107"}], "negative_passages": []}
{"query_id": "q-en-rust-acf00203f583eb2fb99b4f1d4b8f45d078b66e984d8f1c104d0d0e25287d9bcf", "query": "On rustc 1.31.0-nightly ( 2018-10-07) and x8664-unknown-linux-gnu, we see the following surprising behavior: In release mode it seems the write has not happened in the place that it should. Mentioning threadlocal tracking issue: .\nAh, apparently this is missing a . With the behavior is correct. We need to make sure mutating a thread_local static that does not have does not compile.\ncc Did fix this by any chance?\nIt looks like this is fixed on nightly; I'm guessing is the reason. We probably should add some regression tests for the cases listed here.", "positive_passages": [{"docid": "doc-en-rust-d835b88c37b627e38319ac36c22855a98a05372accdefee9240062ccdc38f082", "text": " error[E0594]: cannot assign to immutable thread-local static item --> $DIR/thread-local-mutation.rs:11:5 | LL | S = \"after\"; //~ ERROR cannot assign to immutable | ^^^^^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0594`. ", "commid": "rust_pr_57107"}], "negative_passages": []}
{"query_id": "q-en-rust-cc238fb102f31a667536e1fb25dfa64d12f7387a13f1a73115a9c198577ccf85", "query": "The following code gives \"warning: enum is never used: \": This regressed pretty recently. cc\n27.0 does not warn, 1.28.0 does.\ntriage: P-medium; This seems like a papercut that should be easy to work-around (at least if one isn't using ...).\nswitched to P-high after prompting from\nwhoops forgot to tag with T-compiler in time for the meeting...\nassigning to\nI'm grabbing this, I think I broke it and it should be easy to fix", "positive_passages": [{"docid": "doc-en-rust-7da4a6c535bd013a1a34ebc3b8d9832e6a78c15cfd4fbad08f82a7b07373344d", "text": "hir::ItemKind::Fn(..) | hir::ItemKind::Ty(..) | hir::ItemKind::Static(..) | hir::ItemKind::Existential(..) | hir::ItemKind::Const(..) => { intravisit::walk_item(self, &item); }", "commid": "rust_pr_56456"}], "negative_passages": []}
{"query_id": "q-en-rust-cc238fb102f31a667536e1fb25dfa64d12f7387a13f1a73115a9c198577ccf85", "query": "The following code gives \"warning: enum is never used: \": This regressed pretty recently. cc\n27.0 does not warn, 1.28.0 does.\ntriage: P-medium; This seems like a papercut that should be easy to work-around (at least if one isn't using ...).\nswitched to P-high after prompting from\nwhoops forgot to tag with T-compiler in time for the meeting...\nassigning to\nI'm grabbing this, I think I broke it and it should be easy to fix", "positive_passages": [{"docid": "doc-en-rust-4a09586829fae00306099e0efc5d6d77679dd97d44900b73f10e4696299da462", "text": " // compile-pass #[deny(warnings)] enum Empty { } trait Bar let constness = if cx.tcx.is_const_fn(did) { let constness = if cx.tcx.is_min_const_fn(did) { hir::Constness::Const } else { hir::Constness::NotConst", "commid": "rust_pr_56845"}], "negative_passages": []}
{"query_id": "q-en-rust-7a9dd2c811046b2412a3941dae7ab759f6b8149aefadd51f8e28ba2c70b8b2ef", "query": "Regression from 1.26.0 to 1.27.0 and later. See for example See also and", "positive_passages": [{"docid": "doc-en-rust-4ba56c5bdbb2816a2d4038fb40400b5aebd87b4a2fedfe506a560a74ebe9dd8e", "text": "(self.generics.clean(cx), (&self.decl, self.body).clean(cx)) }); let did = cx.tcx.hir().local_def_id(self.id); let constness = if cx.tcx.is_min_const_fn(did) { hir::Constness::Const } else { hir::Constness::NotConst }; Item { name: Some(self.name.clean(cx)), attrs: self.attrs.clean(cx),", "commid": "rust_pr_56845"}], "negative_passages": []}
{"query_id": "q-en-rust-7a9dd2c811046b2412a3941dae7ab759f6b8149aefadd51f8e28ba2c70b8b2ef", "query": "Regression from 1.26.0 to 1.27.0 and later. See for example See also and", "positive_passages": [{"docid": "doc-en-rust-72dfd5ce98bda4b7042b0c07064728c6e95da46fd78500b70a3f2759ca75ee0e", "text": "visibility: self.vis.clean(cx), stability: self.stab.clean(cx), deprecation: self.depr.clean(cx), def_id: cx.tcx.hir().local_def_id(self.id), def_id: did, inner: FunctionItem(Function { decl, generics, header: self.header, header: hir::FnHeader { constness, ..self.header }, }), } }", "commid": "rust_pr_56845"}], "negative_passages": []}
{"query_id": "q-en-rust-7a9dd2c811046b2412a3941dae7ab759f6b8149aefadd51f8e28ba2c70b8b2ef", "query": "Regression from 1.26.0 to 1.27.0 and later. See for example See also and", "positive_passages": [{"docid": "doc-en-rust-0d5d2ab69064cdbd025c4c5d2270adaaa5ef7a590f858771f82c16b6062bd972", "text": "ty::TraitContainer(_) => self.defaultness.has_value() }; if provided { let constness = if cx.tcx.is_const_fn(self.def_id) { let constness = if cx.tcx.is_min_const_fn(self.def_id) { hir::Constness::Const } else { hir::Constness::NotConst", "commid": "rust_pr_56845"}], "negative_passages": []}
{"query_id": "q-en-rust-7a9dd2c811046b2412a3941dae7ab759f6b8149aefadd51f8e28ba2c70b8b2ef", "query": "Regression from 1.26.0 to 1.27.0 and later. See for example See also and", "positive_passages": [{"docid": "doc-en-rust-7f7d2422d27aa4cba089905e4d08ec078cd41ec42eea84fc5f0b05ad650b9143", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 let mut select = SelectionContext::new(&infcx); let mut select = SelectionContext::with_negative(&infcx, true); let mut already_visited = FxHashSet::default(); let mut predicates = VecDeque::new();", "commid": "rust_pr_55356"}], "negative_passages": []}
{"query_id": "q-en-rust-2217afa2e9e35bdcd853e297fcbcbc4fbd5ffb1834a44a72593a376d295913ea", "query": "Hi, I noticed a weird internal error using . The smallest example I could make to reproduce the bug is an basic library project () whose contains: , , and all work fine; and as mentioned in the code, the place where is declared seems to matter. Here is the output and trace of : [log was from an out of date nightly rust, see my comment below for the log for the latest nightly as of this edit] Have a nice day.\nMy bad, I appear to have accidentally used a version of rust that wasn't up to date. The bug still occurs in the latest nightly, here's the backtrace: Sorry again about that.\nIt looks like is only checking for explicit negative impls at the very beginning, and not throughout the process. I should have a fix for this later today.\nLooks like this error has been there the whole time that synthetic impls have been there - that sample fails all the way back to 1.26.0.", "positive_passages": [{"docid": "doc-en-rust-ee5eebb676cdd9e34949a00c72b98755d71a3839f38e5210ce083388a1b88c36", "text": "match &result { &Ok(Some(ref vtable)) => { // If we see an explicit negative impl (e.g. 'impl !Send for MyStruct'), // we immediately bail out, since it's impossible for us to continue. match vtable { Vtable::VtableImpl(VtableImplData { impl_def_id, .. }) => { // Blame tidy for the weird bracket placement if infcx.tcx.impl_polarity(*impl_def_id) == hir::ImplPolarity::Negative { debug!(\"evaluate_nested_obligations: Found explicit negative impl {:?}, bailing out\", impl_def_id); return None; } }, _ => {} } let obligations = vtable.clone().nested_obligations().into_iter(); if !self.evaluate_nested_obligations(", "commid": "rust_pr_55356"}], "negative_passages": []}
{"query_id": "q-en-rust-2217afa2e9e35bdcd853e297fcbcbc4fbd5ffb1834a44a72593a376d295913ea", "query": "Hi, I noticed a weird internal error using . The smallest example I could make to reproduce the bug is an basic library project () whose contains: , , and all work fine; and as mentioned in the code, the place where is declared seems to matter. Here is the output and trace of : [log was from an out of date nightly rust, see my comment below for the log for the latest nightly as of this edit] Have a nice day.\nMy bad, I appear to have accidentally used a version of rust that wasn't up to date. The bug still occurs in the latest nightly, here's the backtrace: Sorry again about that.\nIt looks like is only checking for explicit negative impls at the very beginning, and not throughout the process. I should have a fix for this later today.\nLooks like this error has been there the whole time that synthetic impls have been there - that sample fails all the way back to 1.26.0.", "positive_passages": [{"docid": "doc-en-rust-916a47e650f4e2ed15de7a42b4bbab9fa94ced01a1ad5a6ef42a3c8a2652e359", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 hir::MatchSource::TryDesugar => { // Issue #51632 if let Ok(try_snippet) = self.tcx.sess.source_map().span_to_snippet(arm_span) { err.span_suggestion_with_applicability( arm_span, \"try wrapping with a success variant\", format!(\"Ok({})\", try_snippet), Applicability::MachineApplicable, ); } } hir::MatchSource::TryDesugar => {} _ => { let msg = \"match arm with an incompatible type\"; if self.tcx.sess.source_map().is_multiline(arm_span) {", "commid": "rust_pr_55423"}], "negative_passages": []}
{"query_id": "q-en-rust-08806f10948897901b3add83c0e58fb568c1c7aca0be1b2d7335292f32e0a945", "query": "We're incorrectly suggesting wrapping with when it won't help: It should be suggesting using using instead.\nDuplicate of . Let me commit to submitting a fix for this on (or, less likely, before) Sunday the twenty-eighth (hopefully preserving the -wrapping suggestion in the cases where it is correct, but if we have to scrap the whole suggestion, we will; false-positives are pretty bad)", "positive_passages": [{"docid": "doc-en-rust-1e4985b5942246975c7be74a00b242e8c3492d4b3d8bdb2fd41296d89a828bf4", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 ", "commid": "rust_pr_55423"}], "negative_passages": []}
{"query_id": "q-en-rust-08806f10948897901b3add83c0e58fb568c1c7aca0be1b2d7335292f32e0a945", "query": "We're incorrectly suggesting wrapping with when it won't help: It should be suggesting using using instead.\nDuplicate of . Let me commit to submitting a fix for this on (or, less likely, before) Sunday the twenty-eighth (hopefully preserving the -wrapping suggestion in the cases where it is correct, but if we have to scrap the whole suggestion, we will; false-positives are pretty bad)", "positive_passages": [{"docid": "doc-en-rust-1feaf58101a17b5147829d1ea1aad9e4f0c635e3a34ff7a32f221616708e1c27", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms. // run-rustfix #![allow(dead_code)] fn missing_discourses() -> Result //~| HELP try wrapping with a success variant } fn main() {}", "commid": "rust_pr_55423"}], "negative_passages": []}
{"query_id": "q-en-rust-08806f10948897901b3add83c0e58fb568c1c7aca0be1b2d7335292f32e0a945", "query": "We're incorrectly suggesting wrapping with when it won't help: It should be suggesting using using instead.\nDuplicate of . Let me commit to submitting a fix for this on (or, less likely, before) Sunday the twenty-eighth (hopefully preserving the -wrapping suggestion in the cases where it is correct, but if we have to scrap the whole suggestion, we will; false-positives are pretty bad)", "positive_passages": [{"docid": "doc-en-rust-d1bc2b172cb70190c65a14b5dfa6b929349378348910929ad014f2a551242d21", "text": "error[E0308]: try expression alternatives have incompatible types --> $DIR/issue-51632-try-desugar-incompatible-types.rs:20:5 --> $DIR/issue-51632-try-desugar-incompatible-types.rs:18:5 | LL | missing_discourses()? | ^^^^^^^^^^^^^^^^^^^^^ | | | expected enum `std::result::Result`, found isize | help: try wrapping with a success variant: `Ok(missing_discourses()?)` | ^^^^^^^^^^^^^^^^^^^^^ expected enum `std::result::Result`, found isize | = note: expected type `std::result::Result let (return_span, mir_description) = if let hir::ExprKind::Closure(_, _, _, span, gen_move) = tcx.hir.expect_expr(mir_node_id).node { ( tcx.sess.source_map().end_point(span), if gen_move.is_some() { \" of generator\" } else { \" of closure\" }, ) } else { // unreachable? (mir.span, \"\") }; let (return_span, mir_description) = match tcx.hir.get(mir_node_id) { hir::Node::Expr(hir::Expr { node: hir::ExprKind::Closure(_, _, _, span, gen_move), .. }) => ( tcx.sess.source_map().end_point(*span), if gen_move.is_some() { \" of generator\" } else { \" of closure\" }, ), hir::Node::ImplItem(hir::ImplItem { node: hir::ImplItemKind::Method(method_sig, _), .. }) => (method_sig.decl.output.span(), \"\"), _ => (mir.span, \"\"), }; Some(RegionName { // This counter value will already have been used, so this function will increment it", "commid": "rust_pr_55822"}], "negative_passages": []}
{"query_id": "q-en-rust-083112a339d828de6cb363d898877139d9854d657aa7455b3cb276508771ddbb", "query": "Hi, I'm not sure if this has been reported or is known at all. If it has feel free to close it out. This example: causes an ICE on rustc 1.31.0-nightly ( 2018-10-25) This is the error I'm getting, backtrace included:\nOkay, the good news here is that I only see this ICE with ; it does not arise with the NLL migration mode being deployed as part of the 2018 edition. See e.g. this .\nThe code in question is rejected by AST-borrowck. It looks like it is being rejected by NLL as well, and the problem is that we hit an error when trying to generated our diagnostic. Therefore, I'm going to tag this as NLL-sound (since this seems like a case where we want to reject the code in question) and NLL-diagnostics (since the ICE itself is stemming from diagnostics code). And I think I'm going to make it P-high, because 1. ICE's are super annoying and 2. I bet its not hard to fix. But I'm not going to tag it with the Release milestone, because its not something we need to worry about backporting to the beta channel (because it does not arise with NLL migration mode).\nactually I'll remove NLL-sound; this isn't a case where NLL is incorrectly accepting the code. Its just incorrectly behaving while generating the diagnostics.", "positive_passages": [{"docid": "doc-en-rust-fe3571440a5d3467e937ea21452b4aba874cd1d3ec45b413ed432c6e11c7c49e", "text": " // Copyright 2017 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// // Licensed under the Apache License, Version 2.0 /// let x: &mut isize = ...; /// let y = || *x += 5; /// let x: &mut isize = ...; /// let y = || *x += 5; /// /// If we were to try to translate this closure into a more explicit /// form, we'd encounter an error with the code as written: /// /// struct Env { x: & &mut isize } /// let x: &mut isize = ...; /// let y = (&mut Env { &x }, fn_ptr); // Closure is pair of env and fn /// fn fn_ptr(env: &mut Env) { **env.x += 5; } /// struct Env { x: & &mut isize } /// let x: &mut isize = ...; /// let y = (&mut Env { &x }, fn_ptr); // Closure is pair of env and fn /// fn fn_ptr(env: &mut Env) { **env.x += 5; } /// /// This is then illegal because you cannot mutate an `&mut` found /// in an aliasable location. To solve, you'd have to translate with /// an `&mut` borrow: /// /// struct Env { x: & &mut isize } /// let x: &mut isize = ...; /// let y = (&mut Env { &mut x }, fn_ptr); // changed from &x to &mut x /// fn fn_ptr(env: &mut Env) { **env.x += 5; } /// struct Env { x: & &mut isize } /// let x: &mut isize = ...; /// let y = (&mut Env { &mut x }, fn_ptr); // changed from &x to &mut x /// fn fn_ptr(env: &mut Env) { **env.x += 5; } /// /// Now the assignment to `**env.x` is legal, but creating a /// mutable pointer to `x` is not because `x` is not mutable. We", "commid": "rust_pr_55696"}], "negative_passages": []}
{"query_id": "q-en-rust-a6cf6888a20a53576f640282b3b54092b9083b5ffbc6a4995132c19686109e1b", "query": "review comments:\nI've looked into this some - NLL misses three errors, all of the form u.yu.x`. In each of the three cases, there is a mutable borrow of some field of the union and then a shared borrow of some other field immediately following. The issue seems to be that the mutable borrow is killed straight away as it isn't used later - therefore not causing a conflict with the shared borrow. I'm inclined to think that this is fine - and that NLL is correct in having less errors here. However, we might want to update the test to use the first mutable borrow for each case in order to make the error happen and demonstrate the diagnostic - if this is the case, I've that we can open a PR for that will update the test. cc\nI'm inclined to agree that this is an instance where the test was not robust and that we need to add uses of those first mutable borrows.\nI've submitted a PR () that makes the test more robust.", "positive_passages": [{"docid": "doc-en-rust-81521d73c0d13cbab2c75152ab433ecefaf2f2984938861b6119afbeaa1f4d1a", "text": " error[E0502]: cannot borrow `u.y` as immutable because it is also borrowed as mutable --> $DIR/union-borrow-move-parent-sibling.rs:25:13 | LL | let a = &mut u.x.0; | ---------- mutable borrow occurs here LL | let b = &u.y; //~ ERROR cannot borrow `u.y` | ^^^^ immutable borrow occurs here LL | use_borrow(a); | - mutable borrow later used here error[E0382]: use of moved value: `u` --> $DIR/union-borrow-move-parent-sibling.rs:29:13 --> $DIR/union-borrow-move-parent-sibling.rs:32:13 | LL | let a = u.x.0; | ----- value moved here LL | let a = u.y; //~ ERROR use of moved value: `u.y` LL | let b = u.y; //~ ERROR use of moved value: `u.y` | ^^^ value used here after move | = note: move occurs because `u` has type `U`, which does not implement the `Copy` trait error[E0502]: cannot borrow `u.y` as immutable because it is also borrowed as mutable 
--> $DIR/union-borrow-move-parent-sibling.rs:38:13 | LL | let a = &mut (u.x.0).0; | -------------- mutable borrow occurs here LL | let b = &u.y; //~ ERROR cannot borrow `u.y` | ^^^^ immutable borrow occurs here LL | use_borrow(a); | - mutable borrow later used here error[E0382]: use of moved value: `u` --> $DIR/union-borrow-move-parent-sibling.rs:41:13 --> $DIR/union-borrow-move-parent-sibling.rs:45:13 | LL | let a = (u.x.0).0; | --------- value moved here LL | let a = u.y; //~ ERROR use of moved value: `u.y` LL | let b = u.y; //~ ERROR use of moved value: `u.y` | ^^^ value used here after move | = note: move occurs because `u` has type `U`, which does not implement the `Copy` trait error[E0502]: cannot borrow `u.x` as immutable because it is also borrowed as mutable --> $DIR/union-borrow-move-parent-sibling.rs:51:13 | LL | let a = &mut *u.y; | --------- mutable borrow occurs here LL | let b = &u.x; //~ ERROR cannot borrow `u` (via `u.x`) | ^^^^ immutable borrow occurs here LL | use_borrow(a); | - mutable borrow later used here error[E0382]: use of moved value: `u` --> $DIR/union-borrow-move-parent-sibling.rs:53:13 --> $DIR/union-borrow-move-parent-sibling.rs:58:13 | LL | let a = *u.y; | ---- value moved here LL | let a = u.x; //~ ERROR use of moved value: `u.x` LL | let b = u.x; //~ ERROR use of moved value: `u.x` | ^^^ value used here after move | = note: move occurs because `u` has type `U`, which does not implement the `Copy` trait error: aborting due to 3 previous errors error: aborting due to 6 previous errors For more information about this error, try `rustc --explain E0382`. Some errors occurred: E0382, E0502. For more information about an error, try `rustc --explain E0382`. ", "commid": "rust_pr_55696"}], "negative_passages": []}
{"query_id": "q-en-rust-a6cf6888a20a53576f640282b3b54092b9083b5ffbc6a4995132c19686109e1b", "query": "review comments:\nI've looked into this some - NLL misses three errors, all of the form u.yu.x`. In each of the three cases, there is a mutable borrow of some field of the union and then a shared borrow of some other field immediately following. The issue seems to be that the mutable borrow is killed straight away as it isn't used later - therefore not causing a conflict with the shared borrow. I'm inclined to think that this is fine - and that NLL is correct in having less errors here. However, we might want to update the test to use the first mutable borrow for each case in order to make the error happen and demonstrate the diagnostic - if this is the case, I've that we can open a PR for that will update the test. cc\nI'm inclined to agree that this is an instance where the test was not robust and that we need to add uses of those first mutable borrows.\nI've submitted a PR () that makes the test more robust.", "positive_passages": [{"docid": "doc-en-rust-9ecc1462aa9224cc27877fbff45df6b6f55bd2741f2167e98b1506a03ea4f629", "text": "y: Box let a = &u.y; //~ ERROR cannot borrow `u.y` let b = &u.y; //~ ERROR cannot borrow `u.y` use_borrow(a); } unsafe fn parent_sibling_move() { let u = U { x: ((Vec::new(), Vec::new()), Vec::new()) }; let a = u.x.0; let a = u.y; //~ ERROR use of moved value: `u.y` let b = u.y; //~ ERROR use of moved value: `u.y` } unsafe fn grandparent_sibling_borrow() { let mut u = U { x: ((Vec::new(), Vec::new()), Vec::new()) }; let a = &mut (u.x.0).0; let a = &u.y; //~ ERROR cannot borrow `u.y` let b = &u.y; //~ ERROR cannot borrow `u.y` use_borrow(a); } unsafe fn grandparent_sibling_move() { let u = U { x: ((Vec::new(), Vec::new()), Vec::new()) }; let a = (u.x.0).0; let a = u.y; //~ ERROR use of moved value: `u.y` let b = u.y; //~ ERROR use of moved value: `u.y` } unsafe fn deref_sibling_borrow() { let mut u = U { y: Box::default() }; let a = &mut 
*u.y; let a = &u.x; //~ ERROR cannot borrow `u` (via `u.x`) let b = &u.x; //~ ERROR cannot borrow `u` (via `u.x`) use_borrow(a); } unsafe fn deref_sibling_move() { let u = U { x: ((Vec::new(), Vec::new()), Vec::new()) }; let a = *u.y; let a = u.x; //~ ERROR use of moved value: `u.x` let b = u.x; //~ ERROR use of moved value: `u.x` }", "commid": "rust_pr_55696"}], "negative_passages": []}
{"query_id": "q-en-rust-a6cf6888a20a53576f640282b3b54092b9083b5ffbc6a4995132c19686109e1b", "query": "review comments:\nI've looked into this some - NLL misses three errors, all of the form u.yu.x`. In each of the three cases, there is a mutable borrow of some field of the union and then a shared borrow of some other field immediately following. The issue seems to be that the mutable borrow is killed straight away as it isn't used later - therefore not causing a conflict with the shared borrow. I'm inclined to think that this is fine - and that NLL is correct in having less errors here. However, we might want to update the test to use the first mutable borrow for each case in order to make the error happen and demonstrate the diagnostic - if this is the case, I've that we can open a PR for that will update the test. cc\nI'm inclined to agree that this is an instance where the test was not robust and that we need to add uses of those first mutable borrows.\nI've submitted a PR () that makes the test more robust.", "positive_passages": [{"docid": "doc-en-rust-4f2bf96149b05081c215424a00bee0347c9a17c18221b7570f25412ac9712d0d", "text": "error[E0502]: cannot borrow `u.y` as immutable because `u.x.0` is also borrowed as mutable --> $DIR/union-borrow-move-parent-sibling.rs:23:14 --> $DIR/union-borrow-move-parent-sibling.rs:25:14 | LL | let a = &mut u.x.0; | ----- mutable borrow occurs here LL | let a = &u.y; //~ ERROR cannot borrow `u.y` LL | let b = &u.y; //~ ERROR cannot borrow `u.y` | ^^^ immutable borrow occurs here LL | use_borrow(a); LL | } | - mutable borrow ends here error[E0382]: use of moved value: `u.y` --> $DIR/union-borrow-move-parent-sibling.rs:29:9 --> $DIR/union-borrow-move-parent-sibling.rs:32:9 | LL | let a = u.x.0; | - value moved here LL | let a = u.y; //~ ERROR use of moved value: `u.y` LL | let b = u.y; //~ ERROR use of moved value: `u.y` | ^ value used here after move | = note: move occurs because `u.y` has type `[type error]`, which does not 
implement the `Copy` trait error[E0502]: cannot borrow `u.y` as immutable because `u.x.0.0` is also borrowed as mutable --> $DIR/union-borrow-move-parent-sibling.rs:35:14 --> $DIR/union-borrow-move-parent-sibling.rs:38:14 | LL | let a = &mut (u.x.0).0; | --------- mutable borrow occurs here LL | let a = &u.y; //~ ERROR cannot borrow `u.y` LL | let b = &u.y; //~ ERROR cannot borrow `u.y` | ^^^ immutable borrow occurs here LL | use_borrow(a); LL | } | - mutable borrow ends here error[E0382]: use of moved value: `u.y` --> $DIR/union-borrow-move-parent-sibling.rs:41:9 --> $DIR/union-borrow-move-parent-sibling.rs:45:9 | LL | let a = (u.x.0).0; | - value moved here LL | let a = u.y; //~ ERROR use of moved value: `u.y` LL | let b = u.y; //~ ERROR use of moved value: `u.y` | ^ value used here after move | = note: move occurs because `u.y` has type `[type error]`, which does not implement the `Copy` trait error[E0502]: cannot borrow `u` (via `u.x`) as immutable because `u` is also borrowed as mutable (via `*u.y`) --> $DIR/union-borrow-move-parent-sibling.rs:47:14 --> $DIR/union-borrow-move-parent-sibling.rs:51:14 | LL | let a = &mut *u.y; | ---- mutable borrow occurs here (via `*u.y`) LL | let a = &u.x; //~ ERROR cannot borrow `u` (via `u.x`) LL | let b = &u.x; //~ ERROR cannot borrow `u` (via `u.x`) | ^^^ immutable borrow occurs here (via `u.x`) LL | use_borrow(a); LL | } | - mutable borrow ends here error[E0382]: use of moved value: `u.x` --> $DIR/union-borrow-move-parent-sibling.rs:53:9 --> $DIR/union-borrow-move-parent-sibling.rs:58:9 | LL | let a = *u.y; | - value moved here LL | let a = u.x; //~ ERROR use of moved value: `u.x` LL | let b = u.x; //~ ERROR use of moved value: `u.x` | ^ value used here after move | = note: move occurs because `u.x` has type `[type error]`, which does not implement the `Copy` trait", "commid": "rust_pr_55696"}], "negative_passages": []}
{"query_id": "q-en-rust-b3fd400b775bbca5ae324dc8e92392ef8e17f812be5beb6d50135a530759903a", "query": "I have two crates: Notice that the implementation of contains an . Something is not happy about that. This currently affects Serde trait impls that use a private helper type with a Serde derive, which is a common pattern. The following script reproduces the issue as of rustc 1.31.0-beta.4 ( 2018-11-01) as well as rustc 1.32.0-nightly ( 2018-11-07).\nMentioning who seems to know about extern_prelude.\nThe issue is caused by entering infinite recursion when trying to print the impl path. If returns and finishes , then this issue should be fixed as well.\n(Also, the issue reproduces on 2015 edition as well.)\nI'm hitting this on stable 1.32.0 with a , and I confirmed that replacing it with a manual impl works. Is there any other workaround?\nJust ran into this on with the serde helper pattern.\nCopying debug!(\"try_print_visible_def_path: def_id={:?}\", def_id); return Ok(( if !span.is_dummy() { self.print_def_path(def_id, &[])? } else { self.path_crate(cnum)? }, true, )); // NOTE(eddyb) the only reason `span` might be dummy, // that we're aware of, is that it's the `std`/`core` // `extern crate` injected by default. // FIXME(eddyb) find something better to key this on, // or avoid ending up with `ExternCrateSource::Extern`, // for the injected `std`/`core`. if span.is_dummy() { return Ok((self.path_crate(cnum)?, true)); } // Disable `try_print_trimmed_def_path` behavior within // the `print_def_path` call, to avoid infinite recursion // in cases where the `extern crate foo` has non-trivial // parents, e.g. it's nested in `impl foo::Trait for Bar` // (see also issues #55779 and #87932). 
self = with_no_visible_paths(|| self.print_def_path(def_id, &[]))?; return Ok((self, true)); } (ExternCrateSource::Path, LOCAL_CRATE) => { debug!(\"try_print_visible_def_path: def_id={:?}\", def_id); return Ok((self.path_crate(cnum)?, true)); } _ => {}", "commid": "rust_pr_89738"}], "negative_passages": []}
{"query_id": "q-en-rust-b3fd400b775bbca5ae324dc8e92392ef8e17f812be5beb6d50135a530759903a", "query": "I have two crates: Notice that the implementation of contains an . Something is not happy about that. This currently affects Serde trait impls that use a private helper type with a Serde derive, which is a common pattern. The following script reproduces the issue as of rustc 1.31.0-beta.4 ( 2018-11-01) as well as rustc 1.32.0-nightly ( 2018-11-07).\nMentioning who seems to know about extern_prelude.\nThe issue is caused by entering infinite recursion when trying to print the impl path. If returns and finishes , then this issue should be fixed as well.\n(Also, the issue reproduces on 2015 edition as well.)\nI'm hitting this on stable 1.32.0 with a , and I confirmed that replacing it with a manual impl works. Is there any other workaround?\nJust ran into this on with the serde helper pattern.\nCopying for (i, &x) in [a, b].iter().enumerate() { let llptr = bx.struct_gep(dest.llval, i as u64); let val = base::from_immediate(bx, x); bx.store_with_flags(val, llptr, dest.align, flags); } let (a_scalar, b_scalar) = match dest.layout.abi { layout::Abi::ScalarPair(ref a, ref b) => (a, b), _ => bug!(\"store_with_flags: invalid ScalarPair layout: {:#?}\", dest.layout) }; let b_offset = a_scalar.value.size(bx).align_to(b_scalar.value.align(bx).abi); let llptr = bx.struct_gep(dest.llval, 0); let val = base::from_immediate(bx, a); let align = dest.align; bx.store_with_flags(val, llptr, align, flags); let llptr = bx.struct_gep(dest.llval, 1); let val = base::from_immediate(bx, b); let align = dest.align.restrict_for_offset(b_offset); bx.store_with_flags(val, llptr, align, flags); } } }", "commid": "rust_pr_56300"}], "negative_passages": []}
{"query_id": "q-en-rust-f0cf2cf6bd9337f63f5574e2be6a701ee174858f859f97a8aaffea8d69427701", "query": "reduced from num_cpus: built with yields cc `\nWhat\u2019s the target? Ah, never mind, I missed that its .\nMore minimal: Issue does not occur on stable (or 1.30.0 nightly), but occurs on rustc beta.\nMarking T-libs as that is likely to be an issue within implementation of , but I didn\u2019t investigate further.\nExcerpt: Clearly both of those adjacent i32 aren't going to be 8-byte aligned ^^ It seems that we're storing to an key with wrong alignment for some reason.\nDon't need to look particularly far, this code generates It looks like some code is assuming that both elements in a scalar pair have the same alignment as the whole pair.\nProbably the easiest way to get to the root cause is to bisect.\nThis code is very likely the culprit: It takes the 0 and 1 GEPs, but uses the same alignment, without offset adjustment.\nOuch, that's bad. To obtain the alignment for both fields, we need to do: Note that we can't use because the components do not match fields (the latter come from the user type definitions, whereas the former are extracted as an optimization).", "positive_passages": [{"docid": "doc-en-rust-5f762bff351e7ad311bf728ad3509e1d7390dcc11abe615241373979e59bfa04", "text": " // compile-flags: -C no-prepopulate-passes #![crate_type=\"rlib\"] #[allow(dead_code)] pub struct Foo println!(\"cargo:rustc-link-lib=shell32\"); } else if target.contains(\"fuchsia\") { println!(\"cargo:rustc-link-lib=zircon\"); println!(\"cargo:rustc-link-lib=fdio\");", "commid": "rust_pr_56568"}], "negative_passages": []}
{"query_id": "q-en-rust-37291114d3cce11e9607eac4f6b604955f59c64820e17ca49f236731ebe8b5c4", "query": "An showed that removing the dependency from LLVM speed up its test suite by a huge amount. For us libstd doesn't depend on directly but it does depend on which depends on (according to the blog post). Specificaly it looks like , the same function as LLVM, is used by us! It'd be great to investigate removing the shell32 dependency from libstd (may be a few other functions here and there). We should run some tests on Windows to see if this improves rustc cycle times!\nBased on my limited investigation, it's likely that replacing with a pure Rust equivalent should be good enough to get rid of the gdi32 dependency in things that only use libstd. So really someone just needs to write that pure Rust implementation which should be easy to do.\nIt would be nice to get rid of this in the compiler too. We spawn a lot of compiler instances during bootstrapping.\nSo now that the change has been implemented, have we actually investigated how much of an impact the change made?\nPerf, an lolbench are both linux only. So nether help with this question. So don't waste your time like I did.\nHere are just number for on 8 cores / 16 threads on Windows 10 v1809: Without the dependency: 333.012 325.587 320.650 With : 310.852 319.165 323.645\nFor what it's worth, though there's no good way to differentiate PR-to-PR due to very high variance, it does look like there was a recent regression in PR build times:
use sys::windows::os::current_exe; use sys::c;
use slice; use ops::Range; use ffi::OsString; use libc::{c_int, c_void}; use fmt; use vec; use core::iter; use slice; use path::PathBuf; pub unsafe fn init(_argc: isize, _argv: *const *const u8) { }", "commid": "rust_pr_56568"}], "negative_passages": []}
{"query_id": "q-en-rust-37291114d3cce11e9607eac4f6b604955f59c64820e17ca49f236731ebe8b5c4", "query": "An showed that removing the dependency from LLVM speed up its test suite by a huge amount. For us libstd doesn't depend on directly but it does depend on which depends on (according to the blog post). Specificaly it looks like , the same function as LLVM, is used by us! It'd be great to investigate removing the shell32 dependency from libstd (may be a few other functions here and there). We should run some tests on Windows to see if this improves rustc cycle times!\nBased on my limited investigation, it's likely that replacing with a pure Rust equivalent should be good enough to get rid of the gdi32 dependency in things that only use libstd. So really someone just needs to write that pure Rust implementation which should be easy to do.\nIt would be nice to get rid of this in the compiler too. We spawn a lot of compiler instances during bootstrapping.\nSo now that the change has been implemented, have we actually investigated how much of an impact the change made?\nPerf, an lolbench are both linux only. So nether help with this question. So don't waste your time like I did.\nHere are just number for on 8 cores / 16 threads on Windows 10 v1809: Without the dependency: 333.012 325.587 320.650 With : 310.852 319.165 323.645\nFor what it's worth, though there's no good way to differentiate PR-to-PR due to very high variance, it does look like there was a recent regression in PR build times: Args { unsafe {
let mut nArgs: c_int = 0; let lpCmdLine = c::GetCommandLineW(); let szArgList = c::CommandLineToArgvW(lpCmdLine, &mut nArgs); // szArcList can be NULL if CommandLinToArgvW failed, // but in that case nArgs is 0 so we won't actually // try to read a null pointer Args { cur: szArgList, range: 0..(nArgs as isize) } let lp_cmd_line = c::GetCommandLineW(); let parsed_args_list = parse_lp_cmd_line( lp_cmd_line as *const u16, || current_exe().map(PathBuf::into_os_string).unwrap_or_else(|_| OsString::new())); Args { parsed_args_list: parsed_args_list.into_iter() } } } /// Implements the Windows command-line argument parsing algorithm. /// /// Microsoft's documentation for the Windows CLI argument format can be found at /// range: Range parsed_args_list: vec::IntoIter fmt::Debug for ArgsInnerDebug<'a> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.write_str(\"[\")?; let mut first = true; for i in self.args.range.clone() { if !first { f.write_str(\", \")?; } first = false; // Here we do allocation which could be avoided. fmt::Debug::fmt(&unsafe { os_string_from_ptr(*self.args.cur.offset(i)) }, f)?; } f.write_str(\"]\")?; Ok(()) self.args.parsed_args_list.as_slice().fmt(f) } }", "commid": "rust_pr_56568"}], "negative_passages": []}
{"query_id": "q-en-rust-37291114d3cce11e9607eac4f6b604955f59c64820e17ca49f236731ebe8b5c4", "query": "An showed that removing the dependency from LLVM speed up its test suite by a huge amount. For us libstd doesn't depend on directly but it does depend on which depends on (according to the blog post). Specificaly it looks like , the same function as LLVM, is used by us! It'd be great to investigate removing the shell32 dependency from libstd (may be a few other functions here and there). We should run some tests on Windows to see if this improves rustc cycle times!\nBased on my limited investigation, it's likely that replacing with a pure Rust equivalent should be good enough to get rid of the gdi32 dependency in things that only use libstd. So really someone just needs to write that pure Rust implementation which should be easy to do.\nIt would be nice to get rid of this in the compiler too. We spawn a lot of compiler instances during bootstrapping.\nSo now that the change has been implemented, have we actually investigated how much of an impact the change made?\nPerf, an lolbench are both linux only. So nether help with this question. So don't waste your time like I did.\nHere are just number for on 8 cores / 16 threads on Windows 10 v1809: Without the dependency: 333.012 325.587 320.650 With : 310.852 319.165 323.645\nFor what it's worth, though there's no good way to differentiate PR-to-PR due to very high variance, it does look like there was a recent regression in PR build times: unsafe fn os_string_from_ptr(ptr: *mut u16) -> OsString { let mut len = 0; while *ptr.offset(len) != 0 { len += 1; } // Push it onto the list. let ptr = ptr as *const u16; let buf = slice::from_raw_parts(ptr, len as usize); OsStringExt::from_wide(buf) } impl Iterator for Args { type Item = OsString;
fn next(&mut self) -> Option fn next(&mut self) -> Option fn next_back(&mut self) -> Option fn next_back(&mut self) -> Option fn len(&self) -> usize { self.range.len() } fn len(&self) -> usize { self.parsed_args_list.len() } } impl Drop for Args { fn drop(&mut self) { // self.cur can be null if CommandLineToArgvW previously failed, // but LocalFree ignores NULL pointers unsafe { c::LocalFree(self.cur as *mut c_void); } #[cfg(test)] mod tests { use sys::windows::args::*; use ffi::OsString; fn chk(string: &str, parts: &[&str]) { let mut wide: Vec *mut LPCWSTR;
pub fn LocalFree(ptr: *mut c_void); pub fn CommandLineToArgvW(lpCmdLine: *mut LPCWSTR, pNumArgs: *mut c_int) -> *mut *mut u16; pub fn GetTempPathW(nBufferLength: DWORD, lpBuffer: LPCWSTR) -> DWORD; pub fn OpenProcessToken(ProcessHandle: HANDLE,", "commid": "rust_pr_56568"}], "negative_passages": []}
{"query_id": "q-en-rust-37291114d3cce11e9607eac4f6b604955f59c64820e17ca49f236731ebe8b5c4", "query": "An showed that removing the dependency from LLVM speed up its test suite by a huge amount. For us libstd doesn't depend on directly but it does depend on which depends on (according to the blog post). Specificaly it looks like , the same function as LLVM, is used by us! It'd be great to investigate removing the shell32 dependency from libstd (may be a few other functions here and there). We should run some tests on Windows to see if this improves rustc cycle times!\nBased on my limited investigation, it's likely that replacing with a pure Rust equivalent should be good enough to get rid of the gdi32 dependency in things that only use libstd. So really someone just needs to write that pure Rust implementation which should be easy to do.\nIt would be nice to get rid of this in the compiler too. We spawn a lot of compiler instances during bootstrapping.\nSo now that the change has been implemented, have we actually investigated how much of an impact the change made?\nPerf, an lolbench are both linux only. So nether help with this question. So don't waste your time like I did.\nHere are just number for on 8 cores / 16 threads on Windows 10 v1809: Without the dependency: 333.012 325.587 320.650 With : 310.852 319.165 323.645\nFor what it's worth, though there's no good way to differentiate PR-to-PR due to very high variance, it does look like there was a recent regression in PR build times: EXTRACFLAGS := ws2_32.lib userenv.lib shell32.lib advapi32.lib EXTRACFLAGS := ws2_32.lib userenv.lib advapi32.lib else EXTRACFLAGS := -lws2_32 -luserenv endif", "commid": "rust_pr_56568"}], "negative_passages": []}
{"query_id": "q-en-rust-4a19c7d23d60f69a2100a4374157d91427b152775377267059fc3cea45b0ebf4", "query": "The following commands no longer seem to work: With incoming, I just get check-stage2-doc-tutorial'. Stop.` and so on.\nNote: the pull request I just filed fixes the targets, but the manual target is , not as you've written in the bug here. I think if the latter used to work, it's stopped working as-intended. The document name is after all.\nI got the names from\nSorry, that was fallout from reorganization of\nadjusted wiki page", "positive_passages": [{"docid": "doc-en-rust-3bc4e1820fbe2dc736e2c20720d38e3c454e51406fd378d37f6be823026915b6", "text": "perf debuginfo doc
$(foreach docname,$(DOC_TEST_NAMES),$(docname)) $(foreach docname,$(DOC_TEST_NAMES),doc-$(docname)) pretty pretty-rpass pretty-rpass-full ", "commid": "rust_pr_6689"}], "negative_passages": []}
{"query_id": "q-en-rust-96087c85f192a28b83957740fe94347e90bb403d948d5c4623c35dbb0ed6a79d", "query": "The following buggy code: produces the error: This is usefully documented in TRPL (ch08-02-strings), but that is not easily discoverable from the error. Meanwhile, the error itself is a natural one that programmers from many languages (C, C++, etc) might run into. Speaking as someone who spent some time assuming that the issue was related to slices/references/usize (i.e. various new, Rust-specific concepts), not strings and UTF-8 encoding, I wonder whether a special case might be produced, replacing error E0277, when the types involved are specifically strings/strs and integers. :\nWe should add a pointing to in the following code of E0277 errors. The example has a similar case to this one, but the additional code should be something along the lines of This will cause an extra to be displayed for this case. Beyond changing the code, you will also need to add a new test case with the repro case above and (running and possibly updating the test comments for any remaining failing tests).\nI would like to work on this.\ndo not hesitate to reach out either here or on if you need help!\nPossible proposed output from :", "positive_passages": [{"docid": "doc-en-rust-7ef1abcab98ac26a1a9ec02cbb27952fa747c788e98262270ae28951a46f919e", "text": "/// ``` #[lang = \"index\"] #[rustc_on_unimplemented( on( _Self=\"&str\", note=\"you can use `.chars().nth()` or `.bytes().nth()` see chapter in The Book /// should be /// /// ```rust /// // should be /// fn foo(v: &[i32]) { /// assert_eq!(v.len(), 42); /// }", "commid": "rust_pr_73660"}], "negative_passages": []}
{"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs. and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure. Also, I'm against bundling binaries in source tarballs. They are target-dependent and bundling all of them will make the released archive quite large. Downloading an external bootstrapper is a minor issue for downstream packagers too, though. See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. Closing as I don't expect we're going to much change the requirements here and have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-7d345cce6e2e516bbcda4d5727ccc0231b644c6f134c02122b861d92bab584ef", "text": "!preds.is_empty() && { let ty_empty_region = cx.tcx.mk_imm_ref(cx.tcx.lifetimes.re_root_empty, ty); preds.iter().all(|t| { let ty_params = &t.skip_binder().trait_ref.substs.iter().skip(1).collect:: let ty_params = &t .skip_binder() .trait_ref .substs .iter() .skip(1) .collect:: /// // Bad /// println!(\"\"); /// /// // Good /// println!(); /// ``` pub PRINTLN_EMPTY_STRING, style,", "commid": "rust_pr_73660"}], "negative_passages": []}
{"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs. and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure. Also, I'm against bundling binaries in source tarballs. They are target-dependent and bundling all of them will make the released archive quite large. Downloading an external bootstrapper is a minor issue for downstream packagers too, though. See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. Closing as I don't expect we're going to much change the requirements here and have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-03691cbb739e01aaa529e796485fdab0a918922d3d0c34e95b4ae5bac7fde485", "text": "declare_clippy_lint! { /// **What it does:** This lint warns when you use `print!()` with a format /// string that ends in a newline. /// string that /// ends in a newline. /// /// **Why is this bad?** You should use `println!()` instead, which appends the /// newline.", "commid": "rust_pr_73660"}], "negative_passages": []}
{"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs. and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure. Also, I'm against bundling binaries in source tarballs. They are target-dependent and bundling all of them will make the released archive quite large. Downloading an external bootstrapper is a minor issue for downstream packagers too, though. See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. Closing as I don't expect we're going to much change the requirements here and have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-a9acced614468e7a414888121ccd85e5be9deb168ee45bed4d9e8c264afe1269", "text": "/// ```rust /// # use std::fmt::Write; /// # let mut buf = String::new(); /// /// // Bad /// writeln!(buf, \"\"); /// /// // Good /// writeln!(buf); /// ``` pub WRITELN_EMPTY_STRING, style,", "commid": "rust_pr_73660"}], "negative_passages": []}
{"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs. and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure. Also, I'm against bundling binaries in source tarballs. They are target-dependent and bundling all of them will make the released archive quite large. Downloading an external bootstrapper is a minor issue for downstream packagers too, though. See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. Closing as I don't expect we're going to much change the requirements here and have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-5fc47318dfa11fca9603aefa33d7627df8167aee335e1cc01fd856eed38b648c", "text": "/// # use std::fmt::Write; /// # let mut buf = String::new(); /// # let name = \"World\"; /// /// // Bad /// write!(buf, \"Hello {}!n\", name); /// /// // Good /// writeln!(buf, \"Hello {}!\", name); /// ``` pub WRITE_WITH_NEWLINE, style,", "commid": "rust_pr_73660"}], "negative_passages": []}
{"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs. and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure. Also, I'm against bundling binaries in source tarballs. They are target-dependent and bundling all of them will make the released archive quite large. Downloading an external bootstrapper is a minor issue for downstream packagers too, though. See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. Closing as I don't expect we're going to much change the requirements here and have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-39a08f976d9cf2ea042abefeb2e52efb3a54cd4bb85f03f44da638c54f7cfc2b", "text": "/// ```rust /// # use std::fmt::Write; /// # let mut buf = String::new(); /// /// // Bad /// writeln!(buf, \"{}\", \"foo\"); /// /// // Good /// writeln!(buf, \"foo\"); /// ``` pub WRITE_LITERAL, style,", "commid": "rust_pr_73660"}], "negative_passages": []}
{"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs. and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure. Also, I'm against bundling binaries in source tarballs. They are target-dependent and bundling all of them will make the released archive quite large. Downloading an external bootstrapper is a minor issue for downstream packagers too, though. See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. Closing as I don't expect we're going to much change the requirements here and have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-36e124e3f02bec6f5dc8d47d7666f21b90483c5d8afdba78c68a66cead465744", "text": "if let (Some(fmt_str), expr) = self.check_tts(cx, &mac.args.inner_tokens(), true) { if fmt_str.symbol == Symbol::intern(\"\") { let mut applicability = Applicability::MachineApplicable; let suggestion = if let Some(e) = expr { snippet_with_applicability(cx, e.span, \"v\", &mut applicability) } else { applicability = Applicability::HasPlaceholders; Cow::Borrowed(\"v\") let suggestion = match expr { Some(expr) => snippet_with_applicability(cx, expr.span, \"v\", &mut applicability), None => { applicability = Applicability::HasPlaceholders; Cow::Borrowed(\"v\") }, }; span_lint_and_sugg(", "commid": "rust_pr_73660"}], "negative_passages": []}
{"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs. and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure. Also, I'm against bundling binaries in source tarballs. They are target-dependent and bundling all of them will make the released archive quite large. Downloading an external bootstrapper is a minor issue for downstream packagers too, though. See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. Closing as I don't expect we're going to much change the requirements here and have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-19a25cb7546a38869ea883763103fd382bf503f8d4c9a86fe7097c9bd7dd7155", "text": "} fn run_ui_cargo(config: &mut compiletest::Config) { if cargo::is_rustc_test_suite() { return; } fn run_tests( config: &compiletest::Config, filter: &Option if old_binding.is_import() || new_binding.is_import() { let binding = if new_binding.is_import() && !new_binding.span.is_dummy() { new_binding let directive = match (&new_binding.kind, &old_binding.kind) { (NameBindingKind::Import { directive, .. }, _) if !new_binding.span.is_dummy() => Some((directive, new_binding.span)), (_, NameBindingKind::Import { directive, .. }) if !old_binding.span.is_dummy() => Some((directive, old_binding.span)), _ => None, }; if let Some((directive, binding_span)) = directive { let suggested_name = if name.as_str().chars().next().unwrap().is_uppercase() { format!(\"Other{}\", name) } else { old_binding format!(\"other_{}\", name) }; let cm = self.session.source_map(); let rename_msg = \"you can use `as` to change the binding name of the import\"; if let ( Ok(snippet), NameBindingKind::Import { directive, ..}, _dummy @ false, ) = ( cm.span_to_snippet(binding.span), binding.kind.clone(), binding.span.is_dummy(), ) { let suggested_name = if name.as_str().chars().next().unwrap().is_uppercase() { format!(\"Other{}\", name) } else { format!(\"other_{}\", name) }; let mut suggestion = None; match directive.subclass { ImportDirectiveSubclass::SingleImport { type_ns_only: true, .. } => suggestion = Some(format!(\"self as {}\", suggested_name)), ImportDirectiveSubclass::SingleImport { source, .. } => { if let Some(pos) = source.span.hi().0.checked_sub(binding_span.lo().0) .map(|pos| pos as usize) { if let Ok(snippet) = self.session.source_map() .span_to_snippet(binding_span) { if pos <= snippet.len() { suggestion = Some(format!( \"{} as {}{}\", &snippet[..pos], suggested_name, if snippet.ends_with(\";\") { \";\" } else { \"\" } )) } } } } ImportDirectiveSubclass::ExternCrate { source, target, .. } => suggestion = Some(format!( \"extern crate {} as {};\", source.unwrap_or(target.name), suggested_name, )), _ => unreachable!(), } let rename_msg = \"you can use `as` to change the binding name of the import\"; if let Some(suggestion) = suggestion { err.span_suggestion_with_applicability( binding.span, &rename_msg, match directive.subclass { ImportDirectiveSubclass::SingleImport { type_ns_only: true, .. } => format!(\"self as {}\", suggested_name), ImportDirectiveSubclass::SingleImport { source, .. } => format!( \"{} as {}{}\", &snippet[..((source.span.hi().0 - binding.span.lo().0) as usize)], suggested_name, if snippet.ends_with(\";\") { \";\" } else { \"\" } ), ImportDirectiveSubclass::ExternCrate { source, target, .. } => format!( \"extern crate {} as {};\", source.unwrap_or(target.name), suggested_name, ), _ => unreachable!(), }, binding_span, rename_msg, suggestion, Applicability::MaybeIncorrect, ); } else { err.span_label(binding.span, rename_msg); err.span_label(binding_span, rename_msg); } }", "commid": "rust_pr_57908"}], "negative_passages": []}
{"query_id": "q-en-rust-05181987c636b464ffc865e40d019431e5c4904fa34808be89afbb794f54dffa", "query": "As of rustc 1.33.0-nightly ( 2018-12-22) the following macro invocation fails with an ICE. Mentioning and because this may have to do with slicing introduced in . Same repro in script form:\nis working on a solution in", "positive_passages": [{"docid": "doc-en-rust-45ac3499486ef2d242869fb5e78948853d44070866d33c3865bdeb88356d4433", "text": " macro_rules! import { ( $($name:ident),* ) => { $( mod $name; pub use self::$name; //~^ ERROR the name `issue_56411_aux` is defined multiple times //~| ERROR `issue_56411_aux` is private, and cannot be re-exported )* } } import!(issue_56411_aux); fn main() { println!(\"Hello, world!\"); } ", "commid": "rust_pr_57908"}], "negative_passages": []}
{"query_id": "q-en-rust-05181987c636b464ffc865e40d019431e5c4904fa34808be89afbb794f54dffa", "query": "As of rustc 1.33.0-nightly ( 2018-12-22) the following macro invocation fails with an ICE. Mentioning and because this may have to do with slicing introduced in . Same repro in script form:\nis working on a solution in", "positive_passages": [{"docid": "doc-en-rust-a300f928e46834fcc9121fd73d702311f56ca366382d8b716208a1689f9d8da9", "text": " error[E0255]: the name `issue_56411_aux` is defined multiple times --> $DIR/issue-56411.rs:5:21 | LL | mod $name; | ---------- previous definition of the module `issue_56411_aux` here LL | pub use self::$name; | ^^^^^^^^^^^ | | | `issue_56411_aux` reimported here | you can use `as` to change the binding name of the import ... LL | import!(issue_56411_aux); | ------------------------- in this macro invocation | = note: `issue_56411_aux` must be defined only once in the type namespace of this module error[E0365]: `issue_56411_aux` is private, and cannot be re-exported --> $DIR/issue-56411.rs:5:21 | LL | pub use self::$name; | ^^^^^^^^^^^ re-export of private `issue_56411_aux` ... LL | import!(issue_56411_aux); | ------------------------- in this macro invocation | = note: consider declaring type or module `issue_56411_aux` with `pub` error: aborting due to 2 previous errors Some errors occurred: E0255, E0365. For more information about an error, try `rustc --explain E0255`. ", "commid": "rust_pr_57908"}], "negative_passages": []}
{"query_id": "q-en-rust-05181987c636b464ffc865e40d019431e5c4904fa34808be89afbb794f54dffa", "query": "As of rustc 1.33.0-nightly ( 2018-12-22) the following macro invocation fails with an ICE. Mentioning and because this may have to do with slicing introduced in . Same repro in script form:\nis working on a solution in", "positive_passages": [{"docid": "doc-en-rust-8402525181334ba8a9b4780c456a829706ad66ec1bf7966f3e9203a2875fd156", "text": " // compile-pass struct T {} fn main() {} ", "commid": "rust_pr_57908"}], "negative_passages": []}
{"query_id": "q-en-rust-b1a925fd45697a656dc4325a4588180891695688f60bfac4eaa470a152fa105e", "query": "This code should be correct, because according to document, all address in are not global reachable. But it panic on assertion failed in latest nightly (2019-01-11) (see )\nRelevant thread:\nSorry, fixed", "positive_passages": [{"docid": "doc-en-rust-dfa27842d41aaa2c25c402191fc727e6eb3a4f21856d3b65b622677e1871ca70", "text": "/// /// The following return false: /// /// - private address (10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16) /// - the loopback address (127.0.0.0/8) /// - the link-local address (169.254.0.0/16) /// - the broadcast address (255.255.255.255/32) /// - test addresses used for documentation (192.0.2.0/24, 198.51.100.0/24 and 203.0.113.0/24) /// - the unspecified address (0.0.0.0) /// - private addresses (see [`is_private()`](#method.is_private)) /// - the loopback address (see [`is_loopback()`](#method.is_loopback)) /// - the link-local address (see [`is_link_local()`](#method.is_link_local)) /// - the broadcast address (see [`is_broadcast()`](#method.is_broadcast)) /// - addresses used for documentation (see [`is_documentation()`](#method.is_documentation)) /// - the unspecified address (see [`is_unspecified()`](#method.is_unspecified)), and the whole /// 0.0.0.0/8 block /// - addresses reserved for future protocols (see /// [`is_ietf_protocol_assignment()`](#method.is_ietf_protocol_assignment), except /// `192.0.0.9/32` and `192.0.0.10/32` which are globally routable /// - addresses reserved for future use (see [`is_reserved()`](#method.is_reserved) /// - addresses reserved for networking devices benchmarking (see /// [`is_benchmarking`](#method.is_benchmarking)) /// /// [ipv4-sr]: https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml /// [`true`]: ../../std/primitive.bool.html", "commid": "rust_pr_60145"}], "negative_passages": []}
{"query_id": "q-en-rust-b1a925fd45697a656dc4325a4588180891695688f60bfac4eaa470a152fa105e", "query": "This code should be correct, because according to document, all address in are not global reachable. But it panic on assertion failed in latest nightly (2019-01-11) (see )\nRelevant thread:\nSorry, fixed", "positive_passages": [{"docid": "doc-en-rust-2bea958c2e629217af66b006a4bbc3ad041102431890e8b8abd33362125c803d", "text": "/// use std::net::Ipv4Addr; /// /// fn main() { /// // private addresses are not global /// assert_eq!(Ipv4Addr::new(10, 254, 0, 0).is_global(), false); /// assert_eq!(Ipv4Addr::new(192, 168, 10, 65).is_global(), false); /// assert_eq!(Ipv4Addr::new(172, 16, 10, 65).is_global(), false); /// /// // the 0.0.0.0/8 block is not global /// assert_eq!(Ipv4Addr::new(0, 1, 2, 3).is_global(), false); /// // in particular, the unspecified address is not global /// assert_eq!(Ipv4Addr::new(0, 0, 0, 0).is_global(), false); /// /// // the loopback address is not global /// assert_eq!(Ipv4Addr::new(127, 0, 0, 1).is_global(), false); /// /// // link local addresses are not global /// assert_eq!(Ipv4Addr::new(169, 254, 45, 1).is_global(), false); /// /// // the broadcast address is not global /// assert_eq!(Ipv4Addr::new(255, 255, 255, 255).is_global(), false); /// /// // the broadcast address is not global /// assert_eq!(Ipv4Addr::new(192, 0, 2, 255).is_global(), false); /// assert_eq!(Ipv4Addr::new(198, 51, 100, 65).is_global(), false); /// assert_eq!(Ipv4Addr::new(203, 0, 113, 6).is_global(), false); /// /// // shared addresses are not global /// assert_eq!(Ipv4Addr::new(100, 100, 0, 0).is_global(), false); /// /// // addresses reserved for protocol assignment are not global /// assert_eq!(Ipv4Addr::new(192, 0, 0, 0).is_global(), false); /// assert_eq!(Ipv4Addr::new(192, 0, 0, 255).is_global(), false); /// /// // addresses reserved for future use are not global /// assert_eq!(Ipv4Addr::new(250, 10, 20, 30).is_global(), false); /// /// // addresses reserved for network devices benchmarking are not global /// assert_eq!(Ipv4Addr::new(198, 18, 0, 0).is_global(), false); /// /// // All the other addresses are global /// assert_eq!(Ipv4Addr::new(1, 1, 1, 1).is_global(), true); /// assert_eq!(Ipv4Addr::new(80, 9, 12, 3).is_global(), true); /// } /// ``` pub fn is_global(&self) -> bool { !self.is_private() && !self.is_loopback() && !self.is_link_local() && !self.is_broadcast() && !self.is_documentation() && !self.is_unspecified() // check if this address is 192.0.0.9 or 192.0.0.10. These addresses are the only two // globally routable addresses in the 192.0.0.0/24 range. if u32::from(*self) == 0xc0000009 || u32::from(*self) == 0xc000000a { return true; } !self.is_private() && !self.is_loopback() && !self.is_link_local() && !self.is_broadcast() && !self.is_documentation() && !self.is_shared() && !self.is_ietf_protocol_assignment() && !self.is_reserved() && !self.is_benchmarking() // Make sure the address is not in 0.0.0.0/8 && self.octets()[0] != 0 } /// Returns [`true`] if this address is part of the Shared Address Space defined in /// [IETF RFC 6598] (`100.64.0.0/10`). /// /// [IETF RFC 6598]: https://tools.ietf.org/html/rfc6598 /// [`true`]: ../../std/primitive.bool.html /// /// # Examples /// /// ``` /// #![feature(ip)] /// use std::net::Ipv4Addr; /// /// fn main() { /// assert_eq!(Ipv4Addr::new(100, 64, 0, 0).is_shared(), true); /// assert_eq!(Ipv4Addr::new(100, 127, 255, 255).is_shared(), true); /// assert_eq!(Ipv4Addr::new(100, 128, 0, 0).is_shared(), false); /// } /// ``` pub fn is_shared(&self) -> bool { self.octets()[0] == 100 && (self.octets()[1] & 0b1100_0000 == 0b0100_0000) } /// Returns [`true`] if this address is part of `192.0.0.0/24`, which is reserved to /// IANA for IETF protocol assignments, as documented in [IETF RFC 6890]. /// /// Note that parts of this block are in use: /// /// - `192.0.0.8/32` is the \"IPv4 dummy address\" (see [IETF RFC 7600]) /// - `192.0.0.9/32` is the \"Port Control Protocol Anycast\" (see [IETF RFC 7723]) /// - `192.0.0.10/32` is used for NAT traversal (see [IETF RFC 8155]) /// /// [IETF RFC 6890]: https://tools.ietf.org/html/rfc6890 /// [IETF RFC 7600]: https://tools.ietf.org/html/rfc7600 /// [IETF RFC 7723]: https://tools.ietf.org/html/rfc7723 /// [IETF RFC 8155]: https://tools.ietf.org/html/rfc8155 /// [`true`]: ../../std/primitive.bool.html /// /// # Examples /// /// ``` /// #![feature(ip)] /// use std::net::Ipv4Addr; /// /// fn main() { /// assert_eq!(Ipv4Addr::new(192, 0, 0, 0).is_ietf_protocol_assignment(), true); /// assert_eq!(Ipv4Addr::new(192, 0, 0, 8).is_ietf_protocol_assignment(), true); /// assert_eq!(Ipv4Addr::new(192, 0, 0, 9).is_ietf_protocol_assignment(), true); /// assert_eq!(Ipv4Addr::new(192, 0, 0, 255).is_ietf_protocol_assignment(), true); /// assert_eq!(Ipv4Addr::new(192, 0, 1, 0).is_ietf_protocol_assignment(), false); /// assert_eq!(Ipv4Addr::new(191, 255, 255, 255).is_ietf_protocol_assignment(), false); /// } /// ``` pub fn is_ietf_protocol_assignment(&self) -> bool { self.octets()[0] == 192 && self.octets()[1] == 0 && self.octets()[2] == 0 } /// Returns [`true`] if this address part of the `198.18.0.0/15` range, which is reserved for /// network devices benchmarking. This range is defined in [IETF RFC 2544] as `192.18.0.0` /// through `198.19.255.255` but [errata 423] corrects it to `198.18.0.0/15`. /// /// [IETF RFC 1112]: https://tools.ietf.org/html/rfc1112 /// [errate 423]: https://www.rfc-editor.org/errata/eid423 /// [`true`]: ../../std/primitive.bool.html /// /// # Examples /// /// ``` /// #![feature(ip)] /// use std::net::Ipv4Addr; /// /// fn main() { /// assert_eq!(Ipv4Addr::new(198, 17, 255, 255).is_benchmarking(), false); /// assert_eq!(Ipv4Addr::new(198, 18, 0, 0).is_benchmarking(), true); /// assert_eq!(Ipv4Addr::new(198, 19, 255, 255).is_benchmarking(), true); /// assert_eq!(Ipv4Addr::new(198, 20, 0, 0).is_benchmarking(), false); /// } /// ``` pub fn is_benchmarking(&self) -> bool { self.octets()[0] == 198 && (self.octets()[1] & 0xfe) == 18 } /// Returns [`true`] if this address is reserved by IANA for future use. [IETF RFC 1112] /// defines the block of reserved addresses as `240.0.0.0/4`. This range normally includes the /// broadcast address `255.255.255.255`, but this implementation explicitely excludes it, since /// it is obviously not reserved for future use. /// /// [IETF RFC 1112]: https://tools.ietf.org/html/rfc1112 /// [`true`]: ../../std/primitive.bool.html /// /// # Examples /// /// ``` /// #![feature(ip)] /// use std::net::Ipv4Addr; /// /// fn main() { /// assert_eq!(Ipv4Addr::new(240, 0, 0, 0).is_reserved(), true); /// assert_eq!(Ipv4Addr::new(255, 255, 255, 254).is_reserved(), true); /// /// assert_eq!(Ipv4Addr::new(239, 255, 255, 255).is_reserved(), false); /// // The broadcast address is not considered as reserved for future use by this /// // implementation /// assert_eq!(Ipv4Addr::new(255, 255, 255, 255).is_reserved(), false); /// } /// ``` pub fn is_reserved(&self) -> bool { self.octets()[0] & 240 == 240 && !self.is_broadcast() } /// Returns [`true`] if this is a multicast address (224.0.0.0/4).", "commid": "rust_pr_60145"}], "negative_passages": []}
{"query_id": "q-en-rust-b1a925fd45697a656dc4325a4588180891695688f60bfac4eaa470a152fa105e", "query": "This code should be correct, because according to document, all address in are not global reachable. But it panic on assertion failed in latest nightly (2019-01-11) (see )\nRelevant thread:\nSorry, fixed", "positive_passages": [{"docid": "doc-en-rust-68a7c17fc244b0b822c1d45fa64dc3aed9b8c312972850a66d4d090f124edfea", "text": "} } /// Returns [`true`] if this is a unique local address (fc00::/7). /// Returns [`true`] if this is a unique local address (`fc00::/7`). /// /// This property is defined in [IETF RFC 4193]. ///", "commid": "rust_pr_60145"}], "negative_passages": []}
{"query_id": "q-en-rust-b1a925fd45697a656dc4325a4588180891695688f60bfac4eaa470a152fa105e", "query": "This code should be correct, because according to document, all address in are not global reachable. But it panic on assertion failed in latest nightly (2019-01-11) (see )\nRelevant thread:\nSorry, fixed", "positive_passages": [{"docid": "doc-en-rust-c5f9c1bd09b19150b7898577dc686eff15cf9c05e90c69f6e09bab837a6dcafd", "text": "(self.segments()[0] & 0xfe00) == 0xfc00 } /// Returns [`true`] if the address is unicast and link-local (fe80::/10). /// Returns [`true`] if the address is a unicast link-local address (`fe80::/64`). /// /// This property is defined in [IETF RFC 4291]. /// A common mis-conception is to think that \"unicast link-local addresses start with /// `fe80::`\", but the [IETF RFC 4291] actually defines a stricter format for these addresses: /// /// ```no_rust /// | 10 | /// | bits | 54 bits | 64 bits | /// +----------+-------------------------+----------------------------+ /// |1111111010| 0 | interface ID | /// +----------+-------------------------+----------------------------+ /// ``` /// /// This method validates the format defined in the RFC and won't recognize the following /// addresses such as `fe80:0:0:1::` or `fe81::` as unicast link-local addresses for example. /// If you need a less strict validation use [`is_unicast_link_local()`] instead. /// /// # Examples /// /// ``` /// #![feature(ip)] /// /// use std::net::Ipv6Addr; /// /// fn main() { /// let ip = Ipv6Addr::new(0xfe80, 0, 0, 0, 0, 0, 0, 0); /// assert!(ip.is_unicast_link_local_strict()); /// /// let ip = Ipv6Addr::new(0xfe80, 0, 0, 0, 0xffff, 0xffff, 0xffff, 0xffff); /// assert!(ip.is_unicast_link_local_strict()); /// /// let ip = Ipv6Addr::new(0xfe80, 0, 0, 1, 0, 0, 0, 0); /// assert!(!ip.is_unicast_link_local_strict()); /// assert!(ip.is_unicast_link_local()); /// /// let ip = Ipv6Addr::new(0xfe81, 0, 0, 0, 0, 0, 0, 0); /// assert!(!ip.is_unicast_link_local_strict()); /// assert!(ip.is_unicast_link_local()); /// } /// ``` /// /// # See also /// /// - [IETF RFC 4291 section 2.5.6] /// - [RFC 4291 errata 4406] /// - [`is_unicast_link_local()`] /// /// [IETF RFC 4291]: https://tools.ietf.org/html/rfc4291 /// [IETF RFC 4291 section 2.5.6]: https://tools.ietf.org/html/rfc4291#section-2.5.6 /// [`true`]: ../../std/primitive.bool.html /// [RFC 4291 errata 4406]: https://www.rfc-editor.org/errata/eid4406 /// [`is_unicast_link_local()`]: ../../std/net/struct.Ipv6Addr.html#method.is_unicast_link_local /// pub fn is_unicast_link_local_strict(&self) -> bool { (self.segments()[0] & 0xffff) == 0xfe80 && (self.segments()[1] & 0xffff) == 0 && (self.segments()[2] & 0xffff) == 0 && (self.segments()[3] & 0xffff) == 0 } /// Returns [`true`] if the address is a unicast link-local address (`fe80::/10`). /// /// This method returns [`true`] for addresses in the range reserved by [RFC 4291 section 2.4], /// i.e. addresses with the following format: /// /// ```no_rust /// | 10 | /// | bits | 54 bits | 64 bits | /// +----------+-------------------------+----------------------------+ /// |1111111010| arbitratry value | interface ID | /// +----------+-------------------------+----------------------------+ /// ``` /// /// As a result, this method consider addresses such as `fe80:0:0:1::` or `fe81::` to be /// unicast link-local addresses, whereas [`is_unicast_link_local_strict()`] does not. If you /// need a strict validation fully compliant with the RFC, use /// [`is_unicast_link_local_strict()`]. /// /// # Examples ///", "commid": "rust_pr_60145"}], "negative_passages": []}
{"query_id": "q-en-rust-b1a925fd45697a656dc4325a4588180891695688f60bfac4eaa470a152fa105e", "query": "This code should be correct, because according to document, all address in are not global reachable. But it panic on assertion failed in latest nightly (2019-01-11) (see )\nRelevant thread:\nSorry, fixed", "positive_passages": [{"docid": "doc-en-rust-0e327b342bad07743c6df591e11031f3c3e17a64cb4a2633a26b702b53d83b64", "text": "/// use std::net::Ipv6Addr; /// /// fn main() { /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc00a, 0x2ff).is_unicast_link_local(), /// false); /// assert_eq!(Ipv6Addr::new(0xfe8a, 0, 0, 0, 0, 0, 0, 0).is_unicast_link_local(), true); /// let ip = Ipv6Addr::new(0xfe80, 0, 0, 0, 0, 0, 0, 0); /// assert!(ip.is_unicast_link_local()); /// /// let ip = Ipv6Addr::new(0xfe80, 0, 0, 0, 0xffff, 0xffff, 0xffff, 0xffff); /// assert!(ip.is_unicast_link_local()); /// /// let ip = Ipv6Addr::new(0xfe80, 0, 0, 1, 0, 0, 0, 0); /// assert!(ip.is_unicast_link_local()); /// assert!(!ip.is_unicast_link_local_strict()); /// /// let ip = Ipv6Addr::new(0xfe81, 0, 0, 0, 0, 0, 0, 0); /// assert!(ip.is_unicast_link_local()); /// assert!(!ip.is_unicast_link_local_strict()); /// } /// ``` /// /// # See also /// /// - [IETF RFC 4291 section 2.4] /// - [RFC 4291 errata 4406] /// /// [IETF RFC 4291 section 2.4]: https://tools.ietf.org/html/rfc4291#section-2.4 /// [`true`]: ../../std/primitive.bool.html /// [RFC 4291 errata 4406]: https://www.rfc-editor.org/errata/eid4406 /// [`is_unicast_link_local_strict()`]: ../../std/net/struct.Ipv6Addr.html#method.is_unicast_link_local_strict /// pub fn is_unicast_link_local(&self) -> bool { (self.segments()[0] & 0xffc0) == 0xfe80 } /// Returns [`true`] if this is a deprecated unicast site-local address /// (fec0::/10). /// Returns [`true`] if this is a deprecated unicast site-local address (fec0::/10). The /// unicast site-local address format is defined in [RFC 4291 section 2.5.7] as: /// /// ```no_rust /// | 10 | /// | bits | 54 bits | 64 bits | /// +----------+-------------------------+----------------------------+ /// |1111111011| subnet ID | interface ID | /// +----------+-------------------------+----------------------------+ /// ``` /// /// [`true`]: ../../std/primitive.bool.html /// [RFC 4291 section 2.5.7]: https://tools.ietf.org/html/rfc4291#section-2.5.7 /// /// # Examples ///", "commid": "rust_pr_60145"}], "negative_passages": []}
{"query_id": "q-en-rust-b1a925fd45697a656dc4325a4588180891695688f60bfac4eaa470a152fa105e", "query": "This code should be correct, because according to document, all address in are not global reachable. But it panic on assertion failed in latest nightly (2019-01-11) (see )\nRelevant thread:\nSorry, fixed", "positive_passages": [{"docid": "doc-en-rust-45b45d485bd88c1cfebb3dd425198a98de35ee847cdec01d0f1f8ed9e40bc913", "text": "/// /// - the loopback address /// - the link-local addresses /// - the (deprecated) site-local addresses /// - unique local addresses /// - the unspecified address /// - the address range reserved for documentation /// /// This method returns [`true`] for site-local addresses as per [RFC 4291 section 2.5.7] /// /// ```no_rust /// The special behavior of [the site-local unicast] prefix defined in [RFC3513] must no longer /// be supported in new implementations (i.e., new implementations must treat this prefix as /// Global Unicast). /// ``` /// /// [`true`]: ../../std/primitive.bool.html /// [RFC 4291 section 2.5.7]: https://tools.ietf.org/html/rfc4291#section-2.5.7 /// /// # Examples ///", "commid": "rust_pr_60145"}], "negative_passages": []}
{"query_id": "q-en-rust-b1a925fd45697a656dc4325a4588180891695688f60bfac4eaa470a152fa105e", "query": "This code should be correct, because according to document, all address in are not global reachable. But it panic on assertion failed in latest nightly (2019-01-11) (see )\nRelevant thread:\nSorry, fixed", "positive_passages": [{"docid": "doc-en-rust-80849e3257b31a1c46b0dd3c65319462b769c21112f05064daf23f95c5002106", "text": "/// ``` pub fn is_unicast_global(&self) -> bool { !self.is_multicast() && !self.is_loopback() && !self.is_unicast_link_local() && !self.is_unicast_site_local() && !self.is_unique_local() && !self.is_unspecified() && !self.is_documentation() && !self.is_loopback() && !self.is_unicast_link_local() && !self.is_unique_local() && !self.is_unspecified() && !self.is_documentation() } /// Returns the address's multicast scope if the address is multicast.", "commid": "rust_pr_60145"}], "negative_passages": []}
{"query_id": "q-en-rust-a7e11b03b0f0bcc4af442d9d27aaaf0e45a436a547a9b34e6eabb32a8e91593c", "query": "Minimised, reproducible example: See Backtrace\nHello I'm getting the same ICE, but without any unstable features. I don't have a minimized example, though, but it may be another data point when searching for the problem. It's this commit: The commit happens with The code is not correct, it wouldn't compile even if the compiler didn't panic. It doesn't happen on stable, but I guess it's because it bails out sooner on closures taking references:\nThis still produces an error on the latest nightly (not sure if it's supposed to compile or not), but no longer ICEs.", "positive_passages": [{"docid": "doc-en-rust-01550d3a62c1f8dbf3a85e641a0aa43e97558c507a9e31fbb9a9fb04365f8a13", "text": " // Regression test for issue #57611 // Ensures that we don't ICE // FIXME: This should compile, but it currently doesn't #![feature(trait_alias)] #![feature(type_alias_impl_trait)] trait Foo { type Bar: Baz version = \"0.1.32\" version = \"0.1.35\" source = \"registry+https://github.com/rust-lang/crates.io-index\" checksum = \"7bc4ac2c824d2bfc612cba57708198547e9a26943af0632aff033e0693074d5c\" checksum = \"e3fcd8aba10d17504c87ef12d4f62ef404c6a4703d16682a9eb5543e6cf24455\" dependencies = [ \"cc\", \"rustc-std-workspace-core\",", "commid": "rust_pr_75877"}], "negative_passages": []}
{"query_id": "q-en-rust-109bb0755956bb23810de87c91ae495632a4dd5f006f25cb017913d5b4abf014", "query": "When building a statically-linked Rust binary crate for the target, linking a crate fails with the following error: Expected result: the crate successfully compiles. Actual result: the crate fails to compile, with an error during the linking stage. Platform: Ubuntu 18.04, x8664. Output of : MUSL standard C library version: 1.1.20. C compiler: MIPS GCC version 7.3, provided by Ubuntu 18.04 (). I tried searching the open and closed issues for this project, and couldn't find any others exhibiting this same error. Forgive me if this is the wrong place to file this bug, but I'm not entirely sure how to debug this problem, so I'm starting here with Rust. The file in makes a call to on big-endian hosts (). This function gets translated into the symbol for the MIPS target, and should be present in the compiler's built-in library, but it appears it's absent here. Now, I can see that uses GCC 5.3 from the OpenWRT toolchain. I'm not entirely sure why this should give any different results from using the GCC 7.3 package provided by Ubuntu 18.04 (, which provides GCC 5.5, and the results are the same (fails to link due to undefined reference to ). Here is the full error message from trying to build the crate: An easy way to reproduce this is to create a with the following contents, and then build it with :\nThe necessary functions should be to the .\nThis does look like an interesting idea, but when trying to compile with that crate, I get lots of errors of this sort: You should be able to recreate this by adding the following line to the I mentioned in the original issue, just before the line at the end: I'm currently working on a project that I'd like to keep on the stable channel. Will this crate not work at all on the stable channel?\nmodify labels: +O-musl\npossibly related to\nAny updates with this issue? 
Have the same problem\nAdding \"-C\", \"link-args=-lgcc\" to re-add libgcc removed by -nostdlib to rustargs fixes the problem, at least for the simple hello world. Or you could implement __bswapsi2 yourself. I'm guessing that openwrt toolchain targets mips32 as opposed of mips32r2, so it requires \"libcall\" for bswap. Hope it helps.\nIt works, thanks!", "positive_passages": [{"docid": "doc-en-rust-e72f6299730b09657cdb8486e8ea3a46f30c134469ba3a9cc4e92e9c15e09d56", "text": "panic_abort = { path = \"../panic_abort\" } core = { path = \"../core\" } libc = { version = \"0.2.74\", default-features = false, features = ['rustc-dep-of-std'] } compiler_builtins = { version = \"0.1.32\" } compiler_builtins = { version = \"0.1.35\" } profiler_builtins = { path = \"../profiler_builtins\", optional = true } unwind = { path = \"../unwind\" } hashbrown = { version = \"0.8.1\", default-features = false, features = ['rustc-dep-of-std'] }", "commid": "rust_pr_75877"}], "negative_passages": []}
{"query_id": "q-en-rust-62b082ff3d00c0d5541a844fa9bf9d06cfd45dbe30ecf935020db0e6c99d1321", "query": "Encountered this when working on test cases for an RFC: cc This seems related to other issues.\nI'm confused. Either this was fixed since yesterday's nightly (it still ICEs on nightly but works locally on master), or something really odd is going on\nIs that the upstream master? Otherwise maybe someone merged a PR or something..\nThis still ICEs on nightly.\nThis now compiles successfully on the latest nightly.", "positive_passages": [{"docid": "doc-en-rust-ae0f1dd513fac11ddd5092a628d19cb1656c4089f6a66d30c55e936aae563cc7", "text": " // check-pass #![feature(existential_type)] existential type A: Iterator; fn def_a() -> A { 0..1 } pub fn use_a() { def_a().map(|x| x); } fn main() {} ", "commid": "rust_pr_63158"}], "negative_passages": []}
{"query_id": "q-en-rust-040ce4823fe42b127f254e4eb557b71e51fbaf32af61e9709865b31985195f41", "query": "When using this branch of bitflags: through a patch in a local directory rls crashes with the following errors:\ni had enabled dev overrides for dependencies in using: removing that out the output from rls is now the normal debug flags but still the same errors:\nThis is now causing widespread ICEs in the RLS. Nominating to get this fixed (maybe has an idea?). See for a simpler reproducer.\nI encountered this issue trying out which causes rustc to crash on when invoked by RLS. I can confirm it's the exact same backtrace as\nI ran into this trying out on Windows 10. RLS inside vscode ends up panicking with this backtrace: (note for repro: I had changed the default feature in tui-rs to be \"crossterm\" because the default feature \"termion\" doesn't compile on windows - but after the termion compile errors, rustc was giving me a similar panic anyway)\nJFYI I could work around this problem by adding to my\nI tried that with the tui-rs (it had ) but same panic. [Edit] you're right, I had missed the inside the version.\nThanks !\nRunning into the same issue when trying to build Clap\nSame issue in vscode on linuxmint and rust 1.34.1 when trying to build glib (dependency of gtk).\nThere is no need to list every crate which depends on , it only clutters the thread and makes less visible.\nHere's a small repro, extracted from what is doing: I'm struggling to get a repro without the compile error, though.\nThank you for such a small repro! I\u2019ll take a closer look tomorrow and see what may be causing that. On Tue, 7 May 2019 at 22:32, Sean Gillespie <:\nLooks like isn't written back for a resolution error? That still repros without , right?\nCorrect, it does. Further minimized repro:\nInteresting. I managed to reduce it to: with: Any other combination is okay, it has to be under under and it has to be combined with a field access (bare unknown identifier doesn't ICE). 
I'm only guessing but it seems that it tries to emplace def 'path' under a which seems to skip a def path segment?\nI think this is confirmed by A, what I believe is, more accurate guess is that we somehow don't correctly nest appropriate typeck tables when visiting .\nOh, you need to have one table per body: look in HIR for fields and map all of those cases back to the AST. This makes a lot more sense now: so is not wrong, but is looking in the wrong place.\nWhat about this: Seems like we need to before walking the expression of the associated const?\nHm, probably! I imagined the problem is we don't nest it before visiting trait items (hence missing segment) but you may be right! Let's see :sweat_smile:\nWith the fix applied this still ICEs on (this time for type) which means we should nest tables for as well", "positive_passages": [{"docid": "doc-en-rust-e7bbf5f78ef1850d8eb1d35e35864a2fe23e4d2854334bf96f228b0a71142e27", "text": "} // walk type and init value self.visit_ty(typ); if let Some(expr) = expr { self.visit_expr(expr); } self.nest_tables(id, |v| { v.visit_ty(typ); if let Some(expr) = expr { v.visit_expr(expr); } }); } // FIXME tuple structs should generate tuple-specific data.", "commid": "rust_pr_60649"}], "negative_passages": []}
{"query_id": "q-en-rust-040ce4823fe42b127f254e4eb557b71e51fbaf32af61e9709865b31985195f41", "query": "When using this branch of bitflags: through a patch in a local directory rls crashes with the following errors:\ni had enabled dev overrides for dependencies in using: removing that out the output from rls is now the normal debug flags but still the same errors:\nThis is now causing widespread ICEs in the RLS. Nominating to get this fixed (maybe has an idea?). See for a simpler reproducer.\nI encountered this issue trying out which causes rustc to crash on when invoked by RLS. I can confirm it's the exact same backtrace as\nI ran into this trying out on Windows 10. RLS inside vscode ends up panicking with this backtrace: (note for repro: I had changed the default feature in tui-rs to be \"crossterm\" because the default feature \"termion\" doesn't compile on windows - but after the termion compile errors, rustc was giving me a similar panic anyway)\nJFYI I could work around this problem by adding to my\nI tried that with the tui-rs (it had ) but same panic. [Edit] you're right, I had missed the inside the version.\nThanks !\nRunning into the same issue when trying to build Clap\nSame issue in vscode on linuxmint and rust 1.34.1 when trying to build glib (dependency of gtk).\nThere is no need to list every crate which depends on , it only clutters the thread and makes less visible.\nHere's a small repro, extracted from what is doing: I'm struggling to get a repro without the compile error, though.\nThank you for such a small repro! I\u2019ll take a closer look tomorrow and see what may be causing that. On Tue, 7 May 2019 at 22:32, Sean Gillespie <:\nLooks like isn't written back for a resolution error? That still repros without , right?\nCorrect, it does. Further minimized repro:\nInteresting. I managed to reduce it to: with: Any other combination is okay, it has to be under under and it has to be combined with a field access (bare unknown identifier doesn't ICE). 
I'm only guessing but it seems that it tries to emplace def 'path' under a which seems to skip a def path segment?\nI think this is confirmed by A, what I believe is, more accurate guess is that we somehow don't correctly nest appropriate typeck tables when visiting .\nOh, you need to have one table per body: look in HIR for fields and map all of those cases back to the AST. This makes a lot more sense now: so is not wrong, but is looking in the wrong place.\nWhat about this: Seems like we need to before walking the expression of the associated const?\nHm, probably! I imagined the problem is we don't nest it before visiting trait items (hence missing segment) but you may be right! Let's see :sweat_smile:\nWith the fix applied this still ICEs on (this time for type) which means we should nest tables for as well", "positive_passages": [{"docid": "doc-en-rust-7c37d39505667b3457e3c30615165c38181eeda791f510297d6779a66af6da3a", "text": " // compile-flags: -Zsave-analysis // Check that this doesn't ICE when processing associated const (field expr). pub fn f() { trait Trait {} impl Trait { const FLAG: u32 = bogus.field; //~ ERROR cannot find value `bogus` } } fn main() {} ", "commid": "rust_pr_60649"}], "negative_passages": []}
{"query_id": "q-en-rust-040ce4823fe42b127f254e4eb557b71e51fbaf32af61e9709865b31985195f41", "query": "When using this branch of bitflags: through a patch in a local directory rls crashes with the following errors:\ni had enabled dev overrides for dependencies in using: removing that out the output from rls is now the normal debug flags but still the same errors:\nThis is now causing widespread ICEs in the RLS. Nominating to get this fixed (maybe has an idea?). See for a simpler reproducer.\nI encountered this issue trying out which causes rustc to crash on when invoked by RLS. I can confirm it's the exact same backtrace as\nI ran into this trying out on Windows 10. RLS inside vscode ends up panicking with this backtrace: (note for repro: I had changed the default feature in tui-rs to be \"crossterm\" because the default feature \"termion\" doesn't compile on windows - but after the termion compile errors, rustc was giving me a similar panic anyway)\nJFYI I could work around this problem by adding to my\nI tried that with the tui-rs (it had ) but same panic. [Edit] you're right, I had missed the inside the version.\nThanks !\nRunning into the same issue when trying to build Clap\nSame issue in vscode on linuxmint and rust 1.34.1 when trying to build glib (dependency of gtk).\nThere is no need to list every crate which depends on , it only clutters the thread and makes less visible.\nHere's a small repro, extracted from what is doing: I'm struggling to get a repro without the compile error, though.\nThank you for such a small repro! I\u2019ll take a closer look tomorrow and see what may be causing that. On Tue, 7 May 2019 at 22:32, Sean Gillespie <:\nLooks like isn't written back for a resolution error? That still repros without , right?\nCorrect, it does. Further minimized repro:\nInteresting. I managed to reduce it to: with: Any other combination is okay, it has to be under under and it has to be combined with a field access (bare unknown identifier doesn't ICE). 
I'm only guessing but it seems that it tries to emplace def 'path' under a which seems to skip a def path segment?\nI think this is confirmed by A, what I believe is, more accurate guess is that we somehow don't correctly nest appropriate typeck tables when visiting .\nOh, you need to have one table per body: look in HIR for fields and map all of those cases back to the AST. This makes a lot more sense now: so is not wrong, but is looking in the wrong place.\nWhat about this: Seems like we need to before walking the expression of the associated const?\nHm, probably! I imagined the problem is we don't nest it before visiting trait items (hence missing segment) but you may be right! Let's see :sweat_smile:\nWith the fix applied this still ICEs on (this time for type) which means we should nest tables for as well", "positive_passages": [{"docid": "doc-en-rust-b523cfd67ece5f2e6a8d67c450dd03609bee563d0fa77dd5e82575f00eb6a222", "text": " error[E0425]: cannot find value `bogus` in this scope --> $DIR/issue-59134-0.rs:8:27 | LL | const FLAG: u32 = bogus.field; | ^^^^^ not found in this scope error: aborting due to previous error For more information about this error, try `rustc --explain E0425`. ", "commid": "rust_pr_60649"}], "negative_passages": []}
{"query_id": "q-en-rust-040ce4823fe42b127f254e4eb557b71e51fbaf32af61e9709865b31985195f41", "query": "When using this branch of bitflags: through a patch in a local directory rls crashes with the following errors:\ni had enabled dev overrides for dependencies in using: removing that out the output from rls is now the normal debug flags but still the same errors:\nThis is now causing widespread ICEs in the RLS. Nominating to get this fixed (maybe has an idea?). See for a simpler reproducer.\nI encountered this issue trying out which causes rustc to crash on when invoked by RLS. I can confirm it's the exact same backtrace as\nI ran into this trying out on Windows 10. RLS inside vscode ends up panicking with this backtrace: (note for repro: I had changed the default feature in tui-rs to be \"crossterm\" because the default feature \"termion\" doesn't compile on windows - but after the termion compile errors, rustc was giving me a similar panic anyway)\nJFYI I could work around this problem by adding to my\nI tried that with the tui-rs (it had ) but same panic. [Edit] you're right, I had missed the inside the version.\nThanks !\nRunning into the same issue when trying to build Clap\nSame issue in vscode on linuxmint and rust 1.34.1 when trying to build glib (dependency of gtk).\nThere is no need to list every crate which depends on , it only clutters the thread and makes less visible.\nHere's a small repro, extracted from what is doing: I'm struggling to get a repro without the compile error, though.\nThank you for such a small repro! I\u2019ll take a closer look tomorrow and see what may be causing that. On Tue, 7 May 2019 at 22:32, Sean Gillespie <:\nLooks like isn't written back for a resolution error? That still repros without , right?\nCorrect, it does. Further minimized repro:\nInteresting. I managed to reduce it to: with: Any other combination is okay, it has to be under under and it has to be combined with a field access (bare unknown identifier doesn't ICE). 
I'm only guessing but it seems that it tries to emplace def 'path' under a which seems to skip a def path segment?\nI think this is confirmed by A, what I believe is, more accurate guess is that we somehow don't correctly nest appropriate typeck tables when visiting .\nOh, you need to have one table per body: look in HIR for fields and map all of those cases back to the AST. This makes a lot more sense now: so is not wrong, but is looking in the wrong place.\nWhat about this: Seems like we need to before walking the expression of the associated const?\nHm, probably! I imagined the problem is we don't nest it before visiting trait items (hence missing segment) but you may be right! Let's see :sweat_smile:\nWith the fix applied this still ICEs on (this time for type) which means we should nest tables for as well", "positive_passages": [{"docid": "doc-en-rust-43dfcf358157d3aea9b597b0480d9d21111bcb7def5cc0a258cd40113fec2089", "text": " // compile-flags: -Zsave-analysis // Check that this doesn't ICE when processing associated const (type). fn func() { trait Trait { type MyType; const CONST: Self::MyType = bogus.field; //~ ERROR cannot find value `bogus` } } fn main() {} ", "commid": "rust_pr_60649"}], "negative_passages": []}
{"query_id": "q-en-rust-040ce4823fe42b127f254e4eb557b71e51fbaf32af61e9709865b31985195f41", "query": "When using this branch of bitflags: through a patch in a local directory rls crashes with the following errors:\ni had enabled dev overrides for dependencies in using: removing that out the output from rls is now the normal debug flags but still the same errors:\nThis is now causing widespread ICEs in the RLS. Nominating to get this fixed (maybe has an idea?). See for a simpler reproducer.\nI encountered this issue trying out which causes rustc to crash on when invoked by RLS. I can confirm it's the exact same backtrace as\nI ran into this trying out on Windows 10. RLS inside vscode ends up panicking with this backtrace: (note for repro: I had changed the default feature in tui-rs to be \"crossterm\" because the default feature \"termion\" doesn't compile on windows - but after the termion compile errors, rustc was giving me a similar panic anyway)\nJFYI I could work around this problem by adding to my\nI tried that with the tui-rs (it had ) but same panic. [Edit] you're right, I had missed the inside the version.\nThanks !\nRunning into the same issue when trying to build Clap\nSame issue in vscode on linuxmint and rust 1.34.1 when trying to build glib (dependency of gtk).\nThere is no need to list every crate which depends on , it only clutters the thread and makes less visible.\nHere's a small repro, extracted from what is doing: I'm struggling to get a repro without the compile error, though.\nThank you for such a small repro! I\u2019ll take a closer look tomorrow and see what may be causing that. On Tue, 7 May 2019 at 22:32, Sean Gillespie <:\nLooks like isn't written back for a resolution error? That still repros without , right?\nCorrect, it does. Further minimized repro:\nInteresting. I managed to reduce it to: with: Any other combination is okay, it has to be under under and it has to be combined with a field access (bare unknown identifier doesn't ICE). 
I'm only guessing but it seems that it tries to emplace def 'path' under a which seems to skip a def path segment?\nI think this is confirmed by A, what I believe is, more accurate guess is that we somehow don't correctly nest appropriate typeck tables when visiting .\nOh, you need to have one table per body: look in HIR for fields and map all of those cases back to the AST. This makes a lot more sense now: so is not wrong, but is looking in the wrong place.\nWhat about this: Seems like we need to before walking the expression of the associated const?\nHm, probably! I imagined the problem is we don't nest it before visiting trait items (hence missing segment) but you may be right! Let's see :sweat_smile:\nWith the fix applied this still ICEs on (this time for type) which means we should nest tables for as well", "positive_passages": [{"docid": "doc-en-rust-2f5bc0c189246f20c47a42d98bf8c96ad23df9f1512c62e19e48a684ee31e93c", "text": " error[E0425]: cannot find value `bogus` in this scope --> $DIR/issue-59134-1.rs:8:37 | LL | const CONST: Self::MyType = bogus.field; | ^^^^^ not found in this scope error: aborting due to previous error For more information about this error, try `rustc --explain E0425`. ", "commid": "rust_pr_60649"}], "negative_passages": []}
{"query_id": "q-en-rust-0d0ad8eb07090379b80371ef9d83f447930eec7f8c315b24df253746743b42af", "query": "With specialization, it is possible to define a default associated type that does not fulfill its trait bounds. Here is a minimal example, that surely should not compile, but does with rustc 1.35.0-nightly: Unsurprisingly, adding the main function causes an ICE. Error message and backtrace (compiled at the playground):\nTriage: the latest nightly rejects the code, marking as E-needs-test.", "positive_passages": [{"docid": "doc-en-rust-a0f593b02f0805e6349ad0127582660c3c150be14b11b7324dd781c9a0c345df", "text": " // check-pass #![feature(impl_trait_in_bindings)] #![allow(incomplete_features)] struct A<'a>(&'a ()); trait Trait cx.per_local[IsNotPromotable].insert(local); cx.per_local[IsNotConst].insert(local); } LocalKind::Var if mode == Mode::Fn => {", "commid": "rust_pr_59724"}], "negative_passages": []}
{"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interesting enough, stable (as jonas pointed out) procuces an error, saying, that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. The ICE is a regression over the error message that was being produced before. We should definitely add an ui test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. 
edition 2018 ------------------ ------------------------ stable expected error beta unexpected compile-pass nightly-2019-04-05 ICE I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-8c91de4f1e86ec79aeba509276b91de2b6d816f5adfa1a771623819ff8a32e56", "text": "} LocalKind::Temp if !temps[local].is_promotable() => { cx.per_local[IsNotPromotable].insert(local); cx.per_local[IsNotConst].insert(local); } _ => {}", "commid": "rust_pr_59724"}], "negative_passages": []}
{"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interesting enough, stable (as jonas pointed out) procuces an error, saying, that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. The ICE is a regression over the error message that was being produced before. We should definitely add an ui test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. edition 2018 ------------------ ------------------------ stable expected error beta unexpected compile-pass nightly-2019-04-05 ICE I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-648c57c8c2c990ceb5e2bb743732d366e3f16d2b0f67ed6f6121a77903069326", "text": "} } // Ensure the `IsNotPromotable` qualification is preserved. // Ensure the `IsNotConst` qualification is preserved. 
// NOTE(eddyb) this is actually unnecessary right now, as // we never replace the local's qualif, but we might in // the future, and so it serves to catch changes that unset", "commid": "rust_pr_59724"}], "negative_passages": []}
{"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interesting enough, stable (as jonas pointed out) procuces an error, saying, that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. The ICE is a regression over the error message that was being produced before. We should definitely add an ui test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. edition 2018 ------------------ ------------------------ stable expected error beta unexpected compile-pass nightly-2019-04-05 ICE I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-4e5f3f52ce4342419bc2f46b69b500f3073fbf6c8d63caf22405a65fe71b3d5f", "text": "// be replaced with calling `insert` to re-set the bit). 
if kind == LocalKind::Temp { if !self.temp_promotion_state[index].is_promotable() { assert!(self.cx.per_local[IsNotPromotable].contains(index)); assert!(self.cx.per_local[IsNotConst].contains(index)); } } }", "commid": "rust_pr_59724"}], "negative_passages": []}
{"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interesting enough, stable (as jonas pointed out) procuces an error, saying, that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. The ICE is a regression over the error message that was being produced before. We should definitely add an ui test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. 
edition 2018 ------------------ ------------------------ stable expected error beta unexpected compile-pass nightly-2019-04-05 ICE I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-957045bed7caa6954e23010d28385b4534243204450b66195112a6c31071f59d", "text": " // only-x86_64 #[cfg(target_arch = \"x86\")] use std::arch::x86::*; #[cfg(target_arch = \"x86_64\")] use std::arch::x86_64::*; unsafe fn pclmul(a: __m128i, b: __m128i) -> __m128i { let imm8 = 3; _mm_clmulepi64_si128(a, b, imm8) //~ ERROR argument 3 is required to be a constant } fn main() {} ", "commid": "rust_pr_59724"}], "negative_passages": []}
{"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interesting enough, stable (as jonas pointed out) procuces an error, saying, that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. The ICE is a regression over the error message that was being produced before. We should definitely add an ui test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. 
edition 2018 ------------------ ------------------------ stable expected error beta unexpected compile-pass nightly-2019-04-05 ICE I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-fa8cbcbd13cb60581fb25ed391a8c697225afec8a0f9f862117068de62e697c9", "text": " error: argument 3 is required to be a constant --> $DIR/const_arg_local.rs:10:5 | LL | _mm_clmulepi64_si128(a, b, imm8) | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to previous error ", "commid": "rust_pr_59724"}], "negative_passages": []}
{"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interestingly enough, stable (as jonas pointed out) produces an error, saying that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. The ICE is a regression over the error message that was being produced before. We should definitely add a UI test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. 

|                    | edition 2018            |
| ------------------ | ----------------------- |
| stable             | expected error          |
| beta               | unexpected compile-pass |
| nightly-2019-04-05 | ICE                     |

I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-1173f3983829635337afc8e30bcbdc1a6aecd3a41ef79ff8838a5dd48b50153a", "text": " // only-x86_64 #[cfg(target_arch = \"x86\")] use std::arch::x86::*; #[cfg(target_arch = \"x86_64\")] use std::arch::x86_64::*; unsafe fn pclmul(a: __m128i, b: __m128i) -> __m128i { _mm_clmulepi64_si128(a, b, *&mut 42) //~ ERROR argument 3 is required to be a constant } fn main() {} ", "commid": "rust_pr_59724"}], "negative_passages": []}
{"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interestingly enough, stable (as jonas pointed out) produces an error, saying that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. The ICE is a regression over the error message that was being produced before. We should definitely add a UI test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. 

|                    | edition 2018            |
| ------------------ | ----------------------- |
| stable             | expected error          |
| beta               | unexpected compile-pass |
| nightly-2019-04-05 | ICE                     |

I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-f912e43b3bde797d4f3eec09ec9048dc9cd001b3aa02e2e775c664d60c94a619", "text": " error: argument 3 is required to be a constant --> $DIR/const_arg_promotable.rs:9:5 | LL | _mm_clmulepi64_si128(a, b, *&mut 42) | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to previous error ", "commid": "rust_pr_59724"}], "negative_passages": []}
{"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interestingly enough, stable (as jonas pointed out) produces an error, saying that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. The ICE is a regression over the error message that was being produced before. We should definitely add a UI test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. 

|                    | edition 2018            |
| ------------------ | ----------------------- |
| stable             | expected error          |
| beta               | unexpected compile-pass |
| nightly-2019-04-05 | ICE                     |

I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-408de29b4563462d366c394ff5c2cd7d18e98909957a2a1865c91a1ab194f440", "text": " // only-x86_64 #[cfg(target_arch = \"x86\")] use std::arch::x86::*; #[cfg(target_arch = \"x86_64\")] use std::arch::x86_64::*; unsafe fn pclmul(a: __m128i, b: __m128i, imm8: i32) -> __m128i { _mm_clmulepi64_si128(a, b, imm8) //~ ERROR argument 3 is required to be a constant } fn main() {} ", "commid": "rust_pr_59724"}], "negative_passages": []}
{"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interestingly enough, stable (as jonas pointed out) produces an error, saying that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. The ICE is a regression over the error message that was being produced before. We should definitely add a UI test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. 

|                    | edition 2018            |
| ------------------ | ----------------------- |
| stable             | expected error          |
| beta               | unexpected compile-pass |
| nightly-2019-04-05 | ICE                     |

I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-93605bb5b0a72aefda7c5c2fd7c6b71e8fd3240564e7b28c4d8aa2c97f555041", "text": " error: argument 3 is required to be a constant --> $DIR/const_arg_wrapper.rs:9:5 | LL | _mm_clmulepi64_si128(a, b, imm8) | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to previous error ", "commid": "rust_pr_59724"}], "negative_passages": []}
{"query_id": "q-en-rust-cf7666fa3b3e5b6f70463b09e7f7979217170ef918eec8e9f6a707ff2a596bec", "query": "Discovered a compiler panic, thought it might relate to , but was told to post a new issue. In minimizing, it seems the generic types are required to trigger the panic, I was unable to reproduce when inlining t7p or t8n. This code panics on 1.33 - 1.35, but does not panic on 1.32 (): On 1.32: On 1.35 nightly: --Bryan\ntriage: P-high. Removing nominated tag since there's little to discuss beyond its P-highness. (The fact its unassigned will be discussed regardless of nomination state.)\nBisection points to i.e. as the culprit. The backtrace in that commit looks very similar to the one reported here, and the parent commit () gives the same compiler error.\nI spent some time looking at this a few weeks ago but did not manage to find anything conclusive. At this point I'm going to be taking a break for a few months, so I am unassigning myself from this issue.\ntriage: assigning self.\ntriage: downgrading to P-medium; this does not warrant revisiting every week.\nI was able to minify it a bit further:\nTriage: It's no longer ICE with the latest nightly.", "positive_passages": [{"docid": "doc-en-rust-b473a1afe3e65265270067683b1b0150271669aa2426a64446f223d447f2930a", "text": " // check-pass trait Mirror { type Other; } #[derive(Debug)] struct Even(usize); struct Odd; impl Mirror for Even { type Other = Odd; } impl Mirror for Odd { type Other = Even; } trait Dyn use errors::DiagnosticBuilder; use errors::{Applicability, DiagnosticBuilder}; use syntax::ast::{self, *}; use syntax::source_map::Spanned; use syntax::ext::base::*; use syntax::ext::build::AstBuilder; use syntax::parse::token; use syntax::parse::parser::Parser; use syntax::print::pprust; use syntax::ptr::P; use syntax::symbol::Symbol;", "commid": "rust_pr_60039"}], "negative_passages": []}
{"query_id": "q-en-rust-4f0ae367abf9aab15a115ea33a3c3dcad49cf0f695cb1218708a7abb3b08a279", "query": "() Output: Version: 1.34.0 1.35.0 2019-04-15 nightly () Expected: Original report:\n26.0 is the first version that accepts this. 1.25.0 rejects it with:\n(not really a regression since more code now compiles)\nAdding T-compiler since is a built-in.\nOpened \u2014it should fix the issue.\ntriage: P-medium. I'm going to assume we don't need any future-compat machinery for a bug of this nature, unless the crater run on PR indicates that there are occurrences of this mis-use of in the wild.", "positive_passages": [{"docid": "doc-en-rust-98be43ee7c4ace576dcd3d529967d0dab7e316963b38529808ae4367fc3b28c5", "text": "return Err(err); } Ok(Assert { cond_expr: parser.parse_expr()?, custom_message: if parser.eat(&token::Comma) { let ts = parser.parse_tokens(); if !ts.is_empty() { Some(ts) } else { None } } else { None }, }) let cond_expr = parser.parse_expr()?; // Some crates use the `assert!` macro in the following form (note extra semicolon): // // assert!( // my_function(); // ); // // Warn about semicolon and suggest removing it. Eventually, this should be turned into an // error. if parser.token == token::Semi { let mut err = cx.struct_span_warn(sp, \"macro requires an expression as an argument\"); err.span_suggestion( parser.span, \"try removing semicolon\", String::new(), Applicability::MaybeIncorrect ); err.note(\"this is going to be an error in the future\"); err.emit(); parser.bump(); } // Some crates use the `assert!` macro in the following form (note missing comma before // message): // // assert!(true \"error message\"); // // Parse this as an actual message, and suggest inserting a comma. Eventually, this should be // turned into an error. 
let custom_message = if let token::Literal(token::Lit::Str_(_), _) = parser.token { let mut err = cx.struct_span_warn(parser.span, \"unexpected string literal\"); let comma_span = cx.source_map().next_point(parser.prev_span); err.span_suggestion_short( comma_span, \"try adding a comma\", \", \".to_string(), Applicability::MaybeIncorrect ); err.note(\"this is going to be an error in the future\"); err.emit(); parse_custom_message(&mut parser) } else if parser.eat(&token::Comma) { parse_custom_message(&mut parser) } else { None }; if parser.token != token::Eof { parser.expect_one_of(&[], &[])?; unreachable!(); } Ok(Assert { cond_expr, custom_message }) } fn parse_custom_message<'a>(parser: &mut Parser<'a>) -> Option for a in arguments { for (i, a) in arguments.iter().enumerate() { use visit::Visitor; if let Some(arg) = &a.arg { this.visit_ty(&arg.ty); } else { this.visit_ty(&decl.inputs[i].ty); } }", "commid": "rust_pr_60527"}], "negative_passages": []}
{"query_id": "q-en-rust-2bd015bcbdef3c2939019a5e62cfd0e368e124a5fabbc2ef06ed3321bd6074d2", "query": "I was using nightly rust extensively for its async/await and specialization feature. After a recent update (nightly-2019-05-02), a new ICE introduced with following logs: After a quick search it seems like is the only in direct cause (stack frame 10). ~I cannot share the code or produce a minimal case because the code base is rather big.~\nI've the minimal case for reproducing. Hope that would help!\nThanks! Looks like even this line is enough to trigger the ICE:\n: rustc complains you should have variable mut although it is already mut.\nThat's and should be fixed in the next nightly\ncc it looks like the desugaring may have broken in argument-position?\nOops, sorry. I'll take a look.\nDenominating since already has a fix\nThanks guys! This is a super fast fix!", "positive_passages": [{"docid": "doc-en-rust-2c7c2d2d47b4659ef0926f3032e01b9b4b9625448aec7f00cd5533878fe76407", "text": " // compile-pass // edition:2018 #![feature(async_await)] // This is a regression test to ensure that simple bindings (where replacement arguments aren't // created during async fn lowering) that have their DefId used during HIR lowering (such as impl // trait) are visited during def collection and thus have a DefId. async fn foo(ws: impl Iterator bug!(\"unexpected const parent path def {:?}\", x); tcx.sess.delay_span_bug( DUMMY_SP, &format!( \"unexpected const parent path def {:?}\", x ), ); tcx.types.err } } }", "commid": "rust_pr_60710"}], "negative_passages": []}
{"query_id": "q-en-rust-0430da44e5f0c0cd5ec3e6ab63520772d61c140d8254453f233ecda5e0243e82", "query": "Following code sample causes a compiler panic. should be .\nAlso happens on nightly. Likely due to const generics changes (cc\nMinimal reproduction:\nAnother case:", "positive_passages": [{"docid": "doc-en-rust-e3aca3629cb79476ab44ed412b8d6e3e0358ade864bfbcc9e4ae073a809604f6", "text": "if !fail { return None; } bug!(\"unexpected const parent path {:?}\", x); tcx.sess.delay_span_bug( DUMMY_SP, &format!( \"unexpected const parent path {:?}\", x ), ); tcx.types.err } } }", "commid": "rust_pr_60710"}], "negative_passages": []}
{"query_id": "q-en-rust-0430da44e5f0c0cd5ec3e6ab63520772d61c140d8254453f233ecda5e0243e82", "query": "Following code sample causes a compiler panic. should be .\nAlso happens on nightly. Likely due to const generics changes (cc\nMinimal reproduction:\nAnother case:", "positive_passages": [{"docid": "doc-en-rust-db81a1c26ac25aac70c3ce011770327849ed73264a44b2dd7ae807be7239af72", "text": "if !fail { return None; } bug!(\"unexpected const parent in type_of_def_id(): {:?}\", x); tcx.sess.delay_span_bug( DUMMY_SP, &format!( \"unexpected const parent in type_of_def_id(): {:?}\", x ), ); tcx.types.err } } }", "commid": "rust_pr_60710"}], "negative_passages": []}
{"query_id": "q-en-rust-0430da44e5f0c0cd5ec3e6ab63520772d61c140d8254453f233ecda5e0243e82", "query": "Following code sample causes a compiler panic. should be .\nAlso happens on nightly. Likely due to const generics changes (cc\nMinimal reproduction:\nAnother case:", "positive_passages": [{"docid": "doc-en-rust-626559eb02689c187aba1f0140e4bfcf5a4b68a2bf73ec33a51c148f6d4a9afa", "text": " #![feature(const_generics)] //~^ WARN the feature `const_generics` is incomplete and may cause the compiler to crash // We should probably be able to infer the types here. However, this test is checking that we don't // get an ICE in this case. It may be modified later to not be an error. struct Foo where F: Fn(&str, usize, usize) -> String where F: Fn(&str, usize, usize) -> Result return Ok(extract_source(src, start_index, end_index)); return extract_source(src, start_index, end_index); } else if let Some(src) = local_begin.sf.external_src.borrow().get_source() { return Ok(extract_source(src, start_index, end_index)); return extract_source(src, start_index, end_index); } else { return Err(SpanSnippetError::SourceNotAvailable { filename: local_begin.sf.name.clone()", "commid": "rust_pr_63508"}], "negative_passages": []}
{"query_id": "q-en-rust-dd9d01e655fbfaa6c4210cf26f6453018b0127bf7f305f523aec0c305ef038ea", "query": "When I used the character in comment, combined with some (invalid) code preceding it, the compiler panicked. I tried this code: I expected to see this happen: rustc compiles this code. Instead, this happened: rustc panicked with the following output : Backtrace:\nRegressed in 1.34.0. Used to be:\ncc\nMinimized: Probably some span handcrafting?\nProbably some span handcrafting? +1\nI was idly curious, so I took a brief look at this. The diagnostic that triggers this was introduced in , but I don't think it is responsible for the error (it's just stepping through characters looking for a closing brace). There is some strange math in (introduced in ). It oscillates between zero-length and 1+ length spans as it is stepping over characters. When it hits a multi-byte character, it returns an invalid span. Example: Starting with a span of (0,1), using would result in (1,1), then (1,2), then (2,2), then (2,4). That last one slices into the middle of the multi-byte character (because of , where width is 3), causing a panic. I'm not familiar enough with this stuff to say how it should behave. It seems like is being used for two different purposes (getting the next zero-length span, and getting the \"next\" character), and it's not clear to me what the behavior should be since it is used in a variety of places. I don't think it can be changed to always return zero-length spans, and it can't be changed to always return non-zero length spans. Perhaps there should be two different functions?\ntriage: P-medium, removing nomination; the ICE to me appears to include enough information for a user to identify the offending character and work around the problem (namely by getting rid of the character). 
I do agree with that there should probably be two distinct functions corresponding to the two distinct purposes that they identified for the current callers of .\nMore test cases have been brought up in\nI guess this is the same bug, but the examples I've seen here concern non-ASCII characters in comments. I'm getting a similar rustc panic from an erroneous code without comments: Compilation attempt:\nPlease file a new issue and reference this one.\nI think it's fixed, try .\nconfirmed fixed by .", "positive_passages": [{"docid": "doc-en-rust-47826f27899f6e4cca1449ecea3aef4b40c2eeee2d6086c7a142629db4af66ad", "text": "/// Returns the source snippet as `String` corresponding to the given `Span` pub fn span_to_snippet(&self, sp: Span) -> Result self.span_to_source(sp, |src, start_index, end_index| src[start_index..end_index] .to_string()) self.span_to_source(sp, |src, start_index, end_index| src.get(start_index..end_index) .map(|s| s.to_string()) .ok_or_else(|| SpanSnippetError::IllFormedSpan(sp))) } pub fn span_to_margin(&self, sp: Span) -> Option self.span_to_source(sp, |src, start_index, _| src[..start_index].to_string()) self.span_to_source(sp, |src, start_index, _| src.get(..start_index) .map(|s| s.to_string()) .ok_or_else(|| SpanSnippetError::IllFormedSpan(sp))) } /// Extend the given `Span` to just after the previous occurrence of `c`. Return the same span", "commid": "rust_pr_63508"}], "negative_passages": []}
{"query_id": "q-en-rust-dd9d01e655fbfaa6c4210cf26f6453018b0127bf7f305f523aec0c305ef038ea", "query": "When I used the character in comment, combined with some (invalid) code preceding it, the compiler panicked. I tried this code: I expected to see this happen: rustc compiles this code. Instead, this happened: rustc panicked with the following output : Backtrace:\nRegressed in 1.34.0. Used to be:\ncc\nMinimized: Probably some span handcrafting?\nProbably some span handcrafting? +1\nI was idly curious, so I took a brief look at this. The diagnostic that triggers this was introduced in , but I don't think it is responsible for the error (it's just stepping through characters looking for a closing brace). There is some strange math in (introduced in ). It oscillates between zero-length and 1+ length spans as it is stepping over characters. When it hits a multi-byte character, it returns an invalid span. Example: Starting with a span of (0,1), using would result in (1,1), then (1,2), then (2,2), then (2,4). That last one slices into the middle of the multi-byte character (because of , where width is 3), causing a panic. I'm not familiar enough with this stuff to say how it should behave. It seems like is being used for two different purposes (getting the next zero-length span, and getting the \"next\" character), and it's not clear to me what the behavior should be since it is used in a variety of places. I don't think it can be changed to always return zero-length spans, and it can't be changed to always return non-zero length spans. Perhaps there should be two different functions?\ntriage: P-medium, removing nomination; the ICE to me appears to include enough information for a user to identify the offending character and work around the problem (namely by getting rid of the character). 
I do agree with that there should probably be two distinct functions corresponding to the two distinct purposes that they identified for the current callers of .\nMore test cases have been brought up in\nI guess this is the same bug, but the examples I've seen here concern non-ASCII characters in comments. I'm getting a similar rustc panic from an erroneous code without comments: Compilation attempt:\nPlease file a new issue and reference this one.\nI think it's fixed, try .\nconfirmed fixed by .", "positive_passages": [{"docid": "doc-en-rust-59a357a72eccc3ef5006be3aed277088178cbcdb60900c8fde680428d7b54904", "text": " struct X {} fn main() { vec![X]; //\u2026 //~^ ERROR expected value, found struct `X` } ", "commid": "rust_pr_63508"}], "negative_passages": []}
{"query_id": "q-en-rust-dd9d01e655fbfaa6c4210cf26f6453018b0127bf7f305f523aec0c305ef038ea", "query": "When I used the character in comment, combined with some (invalid) code preceding it, the compiler panicked. I tried this code: I expected to see this happen: rustc compiles this code. Instead, this happened: rustc panicked with the following output : Backtrace:\nRegressed in 1.34.0. Used to be:\ncc\nMinimized: Probably some span handcrafting?\nProbably some span handcrafting? +1\nI was idly curious, so I took a brief look at this. The diagnostic that triggers this was introduced in , but I don't think it is responsible for the error (it's just stepping through characters looking for a closing brace). There is some strange math in (introduced in ). It oscillates between zero-length and 1+ length spans as it is stepping over characters. When it hits a multi-byte character, it returns an invalid span. Example: Starting with a span of (0,1), using would result in (1,1), then (1,2), then (2,2), then (2,4). That last one slices into the middle of the multi-byte character (because of , where width is 3), causing a panic. I'm not familiar enough with this stuff to say how it should behave. It seems like is being used for two different purposes (getting the next zero-length span, and getting the \"next\" character), and it's not clear to me what the behavior should be since it is used in a variety of places. I don't think it can be changed to always return zero-length spans, and it can't be changed to always return non-zero length spans. Perhaps there should be two different functions?\ntriage: P-medium, removing nomination; the ICE to me appears to include enough information for a user to identify the offending character and work around the problem (namely by getting rid of the character). 
I do agree with that there should probably be two distinct functions corresponding to the two distinct purposes that they identified for the current callers of .\nMore test cases have been brought up in\nI guess this is the same bug, but the examples I've seen here concern non-ASCII characters in comments. I'm getting a similar rustc panic from an erroneous code without comments: Compilation attempt:\nPlease file a new issue and reference this one.\nI think it's fixed, try .\nconfirmed fixed by .", "positive_passages": [{"docid": "doc-en-rust-5302f342feb7fdda5a5536b1bf1f8ac201a6d12ce39f56555eee72599b28e57a", "text": " error[E0423]: expected value, found struct `X` --> $DIR/issue-61226.rs:3:10 | LL | vec![X]; //\u2026 | ^ did you mean `X { /* fields */ }`? error: aborting due to previous error For more information about this error, try `rustc --explain E0423`. ", "commid": "rust_pr_63508"}], "negative_passages": []}
{"query_id": "q-en-rust-5361409b1b92c074b238452d4faeabc0e6fcf44d53a3c9da0ee6c3bb9e9587d2", "query": "Creating a recursive type with infinite size by removing a leads to an internal compiler error. Might be the same issue as . To reproduce this bug, run on this code: Replace by and run again: I expected to see this happen: Instead, this happened: :\ntriage: P-high. Removing nomination.\nassigning to with expectation that they will delegate.\nnominating to try to find someone to investigate this.\nwell spotted ;) Because this issue has the tag and the other doesn't I will post the incremental test here:\ntriage: Reassigning to self and in hopes that one of us will find time to investigate, since it seems to be an issue with the dep-graph/incr-comp infrastructure.\ntriage: Downgrading to P-medium. I still want to resolve this, but it simply does not warrant revisiting every week during T-compiler triage.\nI encountered the same bug. I started to create a new issue, but it's exactly the same as this one. In case it's needed, I uploaded the code on on the branch (there are two commits, the first is fine, the second generates the ICE) in case you want to validate your patch when you will eventually have time to fix it.\nI am running into this issue as well. Since the corresponding PR was merged 24 days ago and the latest version was tagged 14 days ago, I am not sure if this error should have been fixed with . (I did not find the commit on the master but I am also unsure if it was squashed with other commits.) Therefore, I thought I would at least post my error as well, just in case this is supposed to work. In my case I have an enum which holds a struct which in turn references that enum. I know that this can't work and I know what the correct way of handling this scenario is, I just wanted to let you know that this crashes the compiler. 
If you need a more detailed code sample, let me know and I will try to reproduce the error outside of my project.", "positive_passages": [{"docid": "doc-en-rust-ecafaef8f17adf0ba19d28c2f2701b1396e3f1aeb46e4dd789f25255097d68ac", "text": "return None } None => { if !tcx.sess.has_errors() { if !tcx.sess.has_errors_or_delayed_span_bugs() { bug!(\"try_mark_previous_green() - Forcing the DepNode should have set its color\") } else { // If the query we just forced has resulted // in some kind of compilation error, we // don't expect that the corresponding // dep-node color has been updated. // If the query we just forced has resulted in // some kind of compilation error, we cannot rely on // the dep-node color having been properly updated. // This means that the query system has reached an // invalid state. We let the compiler continue (by // returning `None`) so it can emit error messages // and wind down, but rely on the fact that this // invalid state will not be persisted to the // incremental compilation cache because of // compilation errors being present. debug!(\"try_mark_previous_green({:?}) - END - dependency {:?} resulted in compilation error\", dep_node, dep_dep_node); return None } } }", "commid": "rust_pr_66846"}], "negative_passages": []}
{"query_id": "q-en-rust-5361409b1b92c074b238452d4faeabc0e6fcf44d53a3c9da0ee6c3bb9e9587d2", "query": "Creating a recursive type with infinite size by removing a leads to an internal compiler error. Might be the same issue as . To reproduce this bug, run on this code: Replace by and run again: I expected to see this happen: Instead, this happened: :\ntriage: P-high. Removing nomination.\nassigning to with expectation that they will delegate.\nnominating to try to find someone to investigate this.\nwell spotted ;) Because this issue has the tag and the other doesn't I will post the incremental test here:\ntriage: Reassigning to self and in hopes that one of us will find time to investigate, since it seems to be an issue with the dep-graph/incr-comp infrastructure.\ntriage: Downgrading to P-medium. I still want to resolve this, but it simply does not warrant revisiting every week during T-compiler triage.\nI encountered the same bug. I started to create a new issue, but it's exactly the same as this one. In case it's needed, I uploaded the code on on the branch (there are two commits, the first is fine, the second generates the ICE) in case you want to validate your patch when you will eventually have time to fix it.\nI am running into this issue as well. Since the corresponding PR was merged 24 days ago and the latest version was tagged 14 days ago, I am not sure if this error should have been fixed with . (I did not find the commit on the master but I am also unsure if it was squashed with other commits.) Therefore, I thought I would at least post my error as well, just in case this is supposed to work. In my case I have an enum which holds a struct which in turn references that enum. I know that this can't work and I know what the correct way of handling this scenario is, I just wanted to let you know that this crashes the compiler. 
If you need a more detailed code sample, let me know and I will try to reproduce the error outside of my project.", "positive_passages": [{"docid": "doc-en-rust-c030566be5a89720ea8f11f8e8a076420a81baf46227e2719d2ed716b0c55121", "text": " // revisions: rpass cfail enum A { //[cfail]~^ ERROR 3:1: 3:7: recursive type `A` has infinite size [E0072] B(C), } #[cfg(rpass)] struct C(Box); #[cfg(cfail)] struct C(A); //[cfail]~^ ERROR 12:1: 12:13: recursive type `C` has infinite size [E0072] fn main() {} ", "commid": "rust_pr_66846"}], "negative_passages": []}
{"query_id": "q-en-rust-2ac231aeb87028992127a7885dac127252c08f321f332692d893409e3ebbbca5", "query": "The code: It produced error: Although RFC not yet implemented, this code must work already. Or I'm wrong and this syntax does not supported by const-generics?\nNot sure where you see the ICE, removing that.\nThis is glitch after the previous 2 issues.. It is not ICE.\nTo Minimal code: produce error: The variant also give this error.\nNow it is ICE:\nIt's back to the original error.\nNow it produce this errors:\nThe behaviour is now as expected. We just need a test now.\nConst Generic RFC leads the code as possible: Is there another issue that is tracking it?\nYes, there are some subtle issues about making sure that constants in types are well-formed. The tracking issue for this is", "positive_passages": [{"docid": "doc-en-rust-e28e502e3060866e8a02973b53dc0aee3eb5b9ff468bac7de250f82ef75d410a", "text": " #![feature(const_generics)] //~^ WARN the feature `const_generics` is incomplete and may cause the compiler to crash pub struct MyArray The enforcement policies listed above apply to all official Rust venues; including official IRC channels (#rust, #rust-internals, #rust-tools, #rust-libs, #rustc, #rust-beginners, #rust-docs, #rust-community, #rust-lang, and #cargo); GitHub repositories under rust-lang, rust-lang-nursery, and rust-lang-deprecated; and all forums under rust-lang.org (users.rust-lang.org, internals.rust-lang.org). For other projects adopting the Rust Code of Conduct, please contact the maintainers of those projects for enforcement. If you wish to use this code of conduct for your own project, consider explicitly mentioning your moderation policy or making a copy with your own moderation policy so as to avoid confusion. 
The enforcement policies listed above apply to all official Rust venues; including all communication channels (Rust Discord server, Rust Zulip server); GitHub repositories under rust-lang, rust-lang-nursery, and rust-lang-deprecated; and all forums under rust-lang.org (users.rust-lang.org, internals.rust-lang.org). For other projects adopting the Rust Code of Conduct, please contact the maintainers of those projects for enforcement. If you wish to use this code of conduct for your own project, consider explicitly mentioning your moderation policy or making a copy with your own moderation policy so as to avoid confusion. *Adapted from the [Node.js Policy on Trolling](https://blog.izs.me/2012/08/policy-on-trolling) as well as the [Contributor Covenant v1.3.0](https://www.contributor-covenant.org/version/1/3/0/).*", "commid": "rust_pr_65004"}], "negative_passages": []}
{"query_id": "q-en-rust-b32c83ae9fbb19e1be040ea73663ff2d9451fbae31760395a82a970c5b42b633", "query": "There are still of IRC in the repo. Removing them is not a priority but it will avoid confusion for newcomers.\nShould these places be replaced with references to the rust discord server, or just be removed outright?\nProbably replaced. Gonna update title.\nOpened to address this.\nThis is horrible. Replacing IRC with Discord.. :/\nIf you haven't read about it yet, is the post from April.\nYeah I read already. I'd remove the mention of any particular discussion channel, instead of putting Discord in. I can't wait for Matrix to become more mature so the Rust project can use it. Falling back to Discord is quite bad for a FOSS project.\nIMO we should just mirror subreddit solution and point to Discord.", "positive_passages": [{"docid": "doc-en-rust-5099a583a358753ed5a465142995fb817b0d632efc334262abd0701583aac178", "text": "* [Helpful Links and Information](#helpful-links-and-information) If you have questions, please make a post on [internals.rust-lang.org][internals] or hop on the [Rust Discord server][rust-discord], [Rust Zulip server][rust-zulip] or [#rust-internals][pound-rust-internals]. hop on the [Rust Discord server][rust-discord] or [Rust Zulip server][rust-zulip]. As a reminder, all contributors are expected to follow our [Code of Conduct][coc].", "commid": "rust_pr_65004"}], "negative_passages": []}
{"query_id": "q-en-rust-b32c83ae9fbb19e1be040ea73663ff2d9451fbae31760395a82a970c5b42b633", "query": "There are still of IRC in the repo. Removing them is not a priority but it will avoid confusion for newcomers.\nShould these places be replaced with references to the rust discord server, or just be removed outright?\nProbably replaced. Gonna update title.\nOpened to address this.\nThis is horrible. Replacing IRC with Discord.. :/\nIf you haven't read about it yet, is the post from April.\nYeah I read already. I'd remove the mention of any particular discussion channel, instead of putting Discord in. I can't wait for Matrix to become more mature so the Rust project can use it. Falling back to Discord is quite bad for a FOSS project.\nIMO we should just mirror subreddit solution and point to Discord.", "positive_passages": [{"docid": "doc-en-rust-1804033c9d6636cf88be62663cde10cd093390163995fc52c50e43ca393c91b4", "text": "If this is your first time contributing, the [walkthrough] chapter of the guide can give you a good example of how a typical contribution would go. [pound-rust-internals]: https://chat.mibbit.com/?server=irc.mozilla.org&channel=%23rust-internals [internals]: https://internals.rust-lang.org [rust-discord]: http://discord.gg/rust-lang [rust-zulip]: https://rust-lang.zulipchat.com", "commid": "rust_pr_65004"}], "negative_passages": []}
{"query_id": "q-en-rust-b32c83ae9fbb19e1be040ea73663ff2d9451fbae31760395a82a970c5b42b633", "query": "There are still of IRC in the repo. Removing them is not a priority but it will avoid confusion for newcomers.\nShould these places be replaced with references to the rust discord server, or just be removed outright?\nProbably replaced. Gonna update title.\nOpened to address this.\nThis is horrible. Replacing IRC with Discord.. :/\nIf you haven't read about it yet, is the post from April.\nYeah I read already. I'd remove the mention of any particular discussion channel, instead of putting Discord in. I can't wait for Matrix to become more mature so the Rust project can use it. Falling back to Discord is quite bad for a FOSS project.\nIMO we should just mirror subreddit solution and point to Discord.", "positive_passages": [{"docid": "doc-en-rust-10b5ac686e8f802a26a38a02193928e840163b24298885787cb30d759f10ce73", "text": "There are a number of other ways to contribute to Rust that don't deal with this repository. Answer questions in [#rust][pound-rust], or on [users.rust-lang.org][users], Answer questions in the _Get Help!_ channels from the [Rust Discord server][rust-discord], on [users.rust-lang.org][users], or on [StackOverflow][so]. Participate in the [RFC process](https://github.com/rust-lang/rfcs).", "commid": "rust_pr_65004"}], "negative_passages": []}
{"query_id": "q-en-rust-b32c83ae9fbb19e1be040ea73663ff2d9451fbae31760395a82a970c5b42b633", "query": "There are still of IRC in the repo. Removing them is not a priority but it will avoid confusion for newcomers.\nShould these places be replaced with references to the rust discord server, or just be removed outright?\nProbably replaced. Gonna update title.\nOpened to address this.\nThis is horrible. Replacing IRC with Discord.. :/\nIf you haven't read about it yet, is the post from April.\nYeah I read already. I'd remove the mention of any particular discussion channel, instead of putting Discord in. I can't wait for Matrix to become more mature so the Rust project can use it. Falling back to Discord is quite bad for a FOSS project.\nIMO we should just mirror subreddit solution and point to Discord.", "positive_passages": [{"docid": "doc-en-rust-c41f56b6ad926145a2d84de5fb8f667bc818a3e5d8c8d229e2b9a018972a01f9", "text": "it to [Crates.io](http://crates.io). Easier said than done, but very, very valuable! [pound-rust]: http://chat.mibbit.com/?server=irc.mozilla.org&channel=%23rust [rust-discord]: https://discord.gg/rust-lang [users]: https://users.rust-lang.org/ [so]: http://stackoverflow.com/questions/tagged/rust [community-library]: https://github.com/rust-lang/rfcs/labels/A-community-library", "commid": "rust_pr_65004"}], "negative_passages": []}
{"query_id": "q-en-rust-b32c83ae9fbb19e1be040ea73663ff2d9451fbae31760395a82a970c5b42b633", "query": "There are still of IRC in the repo. Removing them is not a priority but it will avoid confusion for newcomers.\nShould these places be replaced with references to the rust discord server, or just be removed outright?\nProbably replaced. Gonna update title.\nOpened to address this.\nThis is horrible. Replacing IRC with Discord.. :/\nIf you haven't read about it yet, is the post from April.\nYeah I read already. I'd remove the mention of any particular discussion channel, instead of putting Discord in. I can't wait for Matrix to become more mature so the Rust project can use it. Falling back to Discord is quite bad for a FOSS project.\nIMO we should just mirror subreddit solution and point to Discord.", "positive_passages": [{"docid": "doc-en-rust-d0f0e6798d568287c567e0aaa98d279bbcd98e275bfa44fc1aada47bdd87488d", "text": "To contribute to Rust, please see [CONTRIBUTING](CONTRIBUTING.md). Rust has an [IRC] culture and most real-time collaboration happens in a variety of channels on Mozilla's IRC network, irc.mozilla.org. The most popular channel is [#rust], a venue for general discussion about Rust. And a good place to ask for help would be [#rust-beginners]. Most real-time collaboration happens in a variety of channels on the [Rust Discord server][rust-discord], with channels dedicated for getting help, community, documentation, and all major contribution areas in the Rust ecosystem. A good place to ask for help would be the #help channel. The [rustc guide] might be a good place to start if you want to find out how various parts of the compiler work. Also, you may find the [rustdocs for the compiler itself][rustdocs] useful. 
[IRC]: https://en.wikipedia.org/wiki/Internet_Relay_Chat [#rust]: irc://irc.mozilla.org/rust [#rust-beginners]: irc://irc.mozilla.org/rust-beginners [rust-discord]: https://discord.gg/rust-lang [rustc guide]: https://rust-lang.github.io/rustc-guide/about-this-guide.html [rustdocs]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc/", "commid": "rust_pr_65004"}], "negative_passages": []}
{"query_id": "q-en-rust-b32c83ae9fbb19e1be040ea73663ff2d9451fbae31760395a82a970c5b42b633", "query": "There are still of IRC in the repo. Removing them is not a priority but it will avoid confusion for newcomers.\nShould these places be replaced with references to the rust discord server, or just be removed outright?\nProbably replaced. Gonna update title.\nOpened to address this.\nThis is horrible. Replacing IRC with Discord.. :/\nIf you haven't read about it yet, is the post from April.\nYeah I read already. I'd remove the mention of any particular discussion channel, instead of putting Discord in. I can't wait for Matrix to become more mature so the Rust project can use it. Falling back to Discord is quite bad for a FOSS project.\nIMO we should just mirror subreddit solution and point to Discord.", "positive_passages": [{"docid": "doc-en-rust-2a95e00d5737ea884f708f2a3bead5b091e6b9ee745c50506b31c2ac100ea214", "text": "`Config` struct. * Adding a sanity check? Take a look at `bootstrap/sanity.rs`. If you have any questions feel free to reach out on `#rust-infra` on IRC or ask on internals.rust-lang.org. When you encounter bugs, please file issues on the rust-lang/rust issue tracker. If you have any questions feel free to reach out on `#infra` channel in the [Rust Discord server][rust-discord] or ask on internals.rust-lang.org. When you encounter bugs, please file issues on the rust-lang/rust issue tracker. [rust-discord]: https://discord.gg/rust-lang ", "commid": "rust_pr_65004"}], "negative_passages": []}
{"query_id": "q-en-rust-38b1f00b97bd3f57f9cd795be236adb36d4d343650179826dae3e31fb8c7d204", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool miri no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and -- nominating for prioritization.\nMiri is fixed upstream, but will be broken again by so I'd like to land that before updating.", "positive_passages": [{"docid": "doc-en-rust-96d1470c33090bd5bd857da6e31c33c2893b69f474ee44be571201eb02d450b9", "text": "[[package]] name = \"directories\" version = \"1.0.2\" version = \"2.0.1\" source = \"registry+https://github.com/rust-lang/crates.io-index\" dependencies = [ \"libc 0.2.54 (registry+https://github.com/rust-lang/crates.io-index)\", \"winapi 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)\", \"cfg-if 0.1.8 (registry+https://github.com/rust-lang/crates.io-index)\", \"dirs-sys 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)\", ] [[package]]", "commid": "rust_pr_61743"}], "negative_passages": []}
{"query_id": "q-en-rust-38b1f00b97bd3f57f9cd795be236adb36d4d343650179826dae3e31fb8c7d204", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool miri no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and -- nominating for prioritization.\nMiri is fixed upstream, but will be broken again by so I'd like to land that before updating.", "positive_passages": [{"docid": "doc-en-rust-161a5b1df5f7dce910b5ad118d395316e0a3eed083e180cafc907bef8678f33b", "text": "] [[package]] name = \"dirs-sys\" version = \"0.3.3\" source = \"registry+https://github.com/rust-lang/crates.io-index\" dependencies = [ \"cfg-if 0.1.8 (registry+https://github.com/rust-lang/crates.io-index)\", \"libc 0.2.54 (registry+https://github.com/rust-lang/crates.io-index)\", \"redox_users 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)\", \"winapi 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)\", ] [[package]] name = \"dlmalloc\" version = \"0.1.3\" source = \"registry+https://github.com/rust-lang/crates.io-index\"", "commid": "rust_pr_61743"}], "negative_passages": []}
{"query_id": "q-en-rust-38b1f00b97bd3f57f9cd795be236adb36d4d343650179826dae3e31fb8c7d204", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool miri no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and -- nominating for prioritization.\nMiri is fixed upstream, but will be broken again by so I'd like to land that before updating.", "positive_passages": [{"docid": "doc-en-rust-d9be40c5089d302fa0ac29a9afa976c3fba508518a68af810af36aa389b2b547", "text": "version = \"0.1.0\" dependencies = [ \"byteorder 1.2.7 (registry+https://github.com/rust-lang/crates.io-index)\", \"cargo_metadata 0.7.1 (registry+https://github.com/rust-lang/crates.io-index)\", \"cargo_metadata 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)\", \"colored 1.6.0 (registry+https://github.com/rust-lang/crates.io-index)\", \"compiletest_rs 0.3.22 (registry+https://github.com/rust-lang/crates.io-index)\", \"directories 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)\", \"directories 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)\", \"env_logger 0.6.0 (registry+https://github.com/rust-lang/crates.io-index)\", \"hex 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)\", \"log 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)\",", "commid": "rust_pr_61743"}], "negative_passages": []}
{"query_id": "q-en-rust-38b1f00b97bd3f57f9cd795be236adb36d4d343650179826dae3e31fb8c7d204", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool miri no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and -- nominating for prioritization.\nMiri is fixed upstream, but will be broken again by so I'd like to land that before updating.", "positive_passages": [{"docid": "doc-en-rust-aa553445cd4cc5b270fa029388525394c3e2ec78f39e2a72a2fae1a3fcbd518c", "text": "\"checksum diff 0.1.11 (registry+https://github.com/rust-lang/crates.io-index)\" = \"3c2b69f912779fbb121ceb775d74d51e915af17aaebc38d28a592843a2dd0a3a\" \"checksum difference 2.0.0 (registry+https://github.com/rust-lang/crates.io-index)\" = \"524cbf6897b527295dff137cec09ecf3a05f4fddffd7dfcd1585403449e74198\" \"checksum digest 0.7.6 (registry+https://github.com/rust-lang/crates.io-index)\" = \"03b072242a8cbaf9c145665af9d250c59af3b958f83ed6824e13533cf76d5b90\" \"checksum directories 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)\" = \"72d337a64190607d4fcca2cb78982c5dd57f4916e19696b48a575fa746b6cb0f\" \"checksum directories 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)\" = \"2ccc83e029c3cebb4c8155c644d34e3a070ccdb4ff90d369c74cd73f7cb3c984\" \"checksum dirs 1.0.5 (registry+https://github.com/rust-lang/crates.io-index)\" = \"3fd78930633bd1c6e35c4b42b1df7b0cbc6bc191146e512bb3bedf243fcc3901\" \"checksum dirs-sys 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)\" = \"937756392ec77d1f2dd9dc3ac9d69867d109a2121479d72c364e42f4cab21e2d\" \"checksum dlmalloc 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)\" = \"f283302e035e61c23f2b86b3093e8c6273a4c3125742d6087e96ade001ca5e63\" \"checksum either 1.5.0 (registry+https://github.com/rust-lang/crates.io-index)\" = 
\"3be565ca5c557d7f59e7cfcf1844f9e3033650c929c6566f511e8005f205c1d0\" \"checksum elasticlunr-rs 2.3.4 (registry+https://github.com/rust-lang/crates.io-index)\" = \"a99a310cd1f9770e7bf8e48810c7bcbb0e078c8fb23a8c7bcf0da4c2bf61a455\"", "commid": "rust_pr_61743"}], "negative_passages": []}
{"query_id": "q-en-rust-38b1f00b97bd3f57f9cd795be236adb36d4d343650179826dae3e31fb8c7d204", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool miri no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and -- nominating for prioritization.\nMiri is fixed upstream, but will be broken again by so I'd like to land that before updating.", "positive_passages": [{"docid": "doc-en-rust-75ab21664fa191fa046ef74eb1f6134bf51bfa942f0a64756efd0f3375344bd0", "text": " Subproject commit e1a0f66373a1a185334a6e3be24e94161e3b4a43 Subproject commit 965160d4d7976ddead182b4a65b73f59818537de ", "commid": "rust_pr_61743"}], "negative_passages": []}
{"query_id": "q-en-rust-2f77a0644870908637b8614d82f677d41914073f0e8ef0fa37df3c0ba8775cda", "query": "Probably one should never write this, and I wonder if it will be disallowed in the future. Happens on stable, beta and nightly, in all const-forms existing (includes const generics) tcx.migrate_borrowck() tcx.migrate_borrowck() && this.borrow_set.location_map.contains_key(&location) } => { let bi = this.borrow_set.location_map[&location]; debug!(", "commid": "rust_pr_61947"}], "negative_passages": []}
{"query_id": "q-en-rust-2179ca3c521bd7ab9ffc57d9e8e8418f9ade43db5b06d66a6b5e5a7b2daf2d40", "query": "I compiled the following code expecting some sort of error message. Instead, the compiler panicked unexpectedly. () errors: errors: :\nTriage: Likely dupe of cc Preliminarily assigning P-high before compiler team meeting.", "positive_passages": [{"docid": "doc-en-rust-84b7391ba619e3c7cfca1391d25670ab023d85112f2af3a359944738a27be20f", "text": " fn f1<'a>(_: &'a mut ()) {} fn f2 self.sess.buffer_lint_with_diagnostic( builtin::BARE_TRAIT_OBJECTS, id, span, \"trait objects without an explicit `dyn` are deprecated\", builtin::BuiltinLintDiagnostics::BareTraitObject(span, is_global), ) // FIXME(davidtwco): This is a hack to detect macros which produce spans of the // call site which do not have a macro backtrace. See #61963. let is_macro_callsite = self.sess.source_map() .span_to_snippet(span) .map(|snippet| snippet.starts_with(\"#[\")) .unwrap_or(true); if !is_macro_callsite { self.sess.buffer_lint_with_diagnostic( builtin::BARE_TRAIT_OBJECTS, id, span, \"trait objects without an explicit `dyn` are deprecated\", builtin::BuiltinLintDiagnostics::BareTraitObject(span, is_global), ) } } fn wrap_in_try_constructor(", "commid": "rust_pr_63014"}], "negative_passages": []}
{"query_id": "q-en-rust-20f696d3f868e72175817ce3cf0d73a0e5839e022980e8eebb0240818ac31c42", "query": "One of the many new (or newly enabled by default) warnings in Servo in today\u2019s Nightly: The suggestion is wrong, which means also fails and therefore does not apply fixes for (the many) other warnings where the suggestion is correct.\nTriage: Preliminarily assigning P-high before Felix reviews this.\nAlso cc since iirc you did some macro related work here.\nPossible solution: when a warning\u2019s span is exactly a macro invocation, don\u2019t emit a \u201csuggested fix\u201d that is likely wrong. (But still emit a warning.)\nWe have that check for most suggestions, but it might be missing for . Was this in latest nightly? I know a couple of these have been fixed in the past few weeks.\nThis is in rustc 1.37.0-nightly ( 2019-06-18). works around this and allows \u2019ing the rest of the warnings (namely old syntax for inclusive ranges) or warnings in other crates.\nOh. I just noticed the suggestion. I\u2019d missed it in that blob of text.\nApologies for the time this has taken - I've struggled to find time to dig in to this. So far, I've managed to make a smaller example (it still depends on , and ) that reproduces the issue, you can find it in (run in the directory).\nThanks to some help from I've now got a minimal test case w/out any dependencies that outputs the same tokens that the reduced example from servo did (you can see it in ). When inspecting the span that the lint is emitted for, it isn't marked as coming from a macro expansion, and it appears that it was created that way during the macro expansion. I'd appreciate any opinions on what the correct fix is here - is it a bug in / that they output what appears to be incorrect spans? Should rustc be able to handle this case? 
If so, where should I be looking to mark the span or what check should I be performing to identify it as a result of macro expansion?\ncc\nI believe this is a compiler bug, not something that needs a fix in syn or quote. is a call_site span. It is the span of tokens in macro output that are not attributable to any particular input token. Usually that will be most of the tokens in any macro output; diagnostics need to take this into account.", "positive_passages": [{"docid": "doc-en-rust-ccacac1d35e5af7ba5f1796a2c1d0e16c166b5a03afbe8582919e39a089ffc49", "text": " // force-host // no-prefer-dynamic #![crate_type = \"proc-macro\"] extern crate proc_macro; use proc_macro::{Group, TokenStream, TokenTree}; // This macro exists as part of a reproduction of #61963 but without using quote/syn/proc_macro2. #[proc_macro_derive(DomObject)] pub fn expand_token_stream(input: TokenStream) -> TokenStream { // Construct a dummy span - `#0 bytes(0..0)` - which is present in the input because // of the specially crafted generated tokens in the `attribute-crate` proc-macro. let dummy_span = input.clone().into_iter().nth(0).unwrap().span(); // Define what the macro would output if constructed properly from the source using syn/quote. let output: TokenStream = \"impl Bar for ((), Qux let msg = \"A non-empty glob must import something with the glob's visibility\"; self.r.session.span_err(directive.span, msg); let msg = \"glob import doesn't reexport anything because no candidate is public enough\"; self.r.session.buffer_lint(UNUSED_IMPORTS, directive.id, directive.span, msg); } return None; }", "commid": "rust_pr_65539"}], "negative_passages": []}
{"query_id": "q-en-rust-3cdad1f570797de323d8525a97742031c05e32d5f630caebc78751aa1c1c0ca8", "query": "Consider the following three examples: A and C are accepted and B has a compilation error. However, the message reported in B looks more like a lint than a compilation error. A consequence of this error is that adding a non-public function to a module (e.g. the in B) may break code that imports from that module. This causes surprises when refactoring. Shouldn't \"A non-empty glob must import something with the glob's visibility\" be a lint? The original discussion is here:\nThe error was introduced in as a part of import modularization, and discussed in one of the related issues, but I can't find where exactly. The error was introduced by analogy with errors for single imports (and just to be conservative): The error is not technically necessary, and we should be able to report it as a lint for glob imports while keeping it an error for single imports.\nWould this be a \"good first issue\" or is it complex to fix?\nNo, not complex. Find where \"non-empty glob must import something\" is reported. Replace with .\nI want to do this issue but the problem I have is with a test (ui/imports/reexports) that checks for this error, I don't know the compiler structure so I don't know when compiling this file the lint will not be reported because of the other errors or not. 
Else I changed the to", "positive_passages": [{"docid": "doc-en-rust-e100dd78abc083f99e66cd3ac0c9591f2b0b96176704402301cfab74c05b5dd1", "text": " #![warn(unused_imports)] mod a { fn foo() {} mod foo {} mod a { pub use super::foo; //~ ERROR cannot be re-exported pub use super::*; //~ ERROR must import something with the glob's visibility pub use super::*; //~^ WARNING glob import doesn't reexport anything because no candidate is public enough } } mod b { pub fn foo() {} mod foo { pub struct S; } mod foo { pub struct S; } pub mod a { pub use super::foo; // This is OK since the value `foo` is visible enough.", "commid": "rust_pr_65539"}], "negative_passages": []}
{"query_id": "q-en-rust-3cdad1f570797de323d8525a97742031c05e32d5f630caebc78751aa1c1c0ca8", "query": "Consider the following three examples: A and C are accepted and B has a compilation error. However, the message reported in B looks more like a lint than a compilation error. A consequence of this error is that adding a non-public function to a module (e.g. the in B) may break code that imports from that module. This causes surprises when refactoring. Shouldn't \"A non-empty glob must import something with the glob's visibility\" be a lint? The original discussion is here:\nThe error was introduced in as a part of import modularization, and discussed in one of the related issues, but I can't find where exactly. The error was introduced by analogy with errors for single imports (and just to be conservative): The error is not technically necessary, and we should be able to report it as a lint for glob imports while keeping it an error for single imports.\nWould this be a \"good first issue\" or is it complex to fix?\nNo, not complex. Find where \"non-empty glob must import something\" is reported. Replace with .\nI want to do this issue but the problem I have is with a test (ui/imports/reexports) that checks for this error, I don't know the compiler structure so I don't know when compiling this file the lint will not be reported because of the other errors or not. 
Else I changed the to", "positive_passages": [{"docid": "doc-en-rust-e81bc549430bcc4da8e18c8aa6bb033c4c640f47e1e25f50f8a0eb25392cc441", "text": "error[E0364]: `foo` is private, and cannot be re-exported --> $DIR/reexports.rs:6:17 --> $DIR/reexports.rs:8:17 | LL | pub use super::foo; | ^^^^^^^^^^ | note: consider marking `foo` as `pub` in the imported module --> $DIR/reexports.rs:6:17 --> $DIR/reexports.rs:8:17 | LL | pub use super::foo; | ^^^^^^^^^^ error: A non-empty glob must import something with the glob's visibility --> $DIR/reexports.rs:7:17 | LL | pub use super::*; | ^^^^^^^^ error[E0603]: module `foo` is private --> $DIR/reexports.rs:28:15 --> $DIR/reexports.rs:33:15 | LL | use b::a::foo::S; | ^^^ error[E0603]: module `foo` is private --> $DIR/reexports.rs:29:15 --> $DIR/reexports.rs:34:15 | LL | use b::b::foo::S as T; | ^^^ error: aborting due to 4 previous errors warning: glob import doesn't reexport anything because no candidate is public enough --> $DIR/reexports.rs:9:17 | LL | pub use super::*; | ^^^^^^^^ | note: lint level defined here --> $DIR/reexports.rs:1:9 | LL | #![warn(unused_imports)] | ^^^^^^^^^^^^^^ error: aborting due to 3 previous errors Some errors have detailed explanations: E0364, E0603. For more information about an error, try `rustc --explain E0364`.", "commid": "rust_pr_65539"}], "negative_passages": []}
{"query_id": "q-en-rust-bf57e7b18ed8b9ab5e27682bd209117a0b094e52120110102ca62034de171272", "query": "The docs currently claim that the minimum Linux kernel version for is 2.6.18 (released September 20th, 2006). This is because RHEL 5 used that kernel version. However, RHEL 5 entered ELS on March 31, 2017. Should we continue to support RHEL 5 for , or should we increase the minimum Linux Kernel version to 2.6.27 (2nd LTS) or 2.6.32 (RHEL 6, 3rd LTS)? . Even bumping the min-version to 2.6.27 would allow us to remove most of the Linux-specific hacks in . Example: .\nGiven that RHEL is the only reason we keep comically old kernel versions around, I would propose that Rust only support RHEL until Maintenance Support ends. This is (essentially) what we already did for RHEL 4. Rust never supported RHEL 4, and back when RHEL 5 still had maintenance support. It would be nice to get someone from Red Hat or a RHEL customer to comment. This policy would allow us to increment the minimum kernel from 2.6.18 to 2.6.32 (and remove a lot of Linux-specific hacks). Note that RHEL has a weird (and very long) support system for RHEL 4, 5, and 6 (sources: and ). 
It has 5 major steps: Full Support Normal support of the OS Maintenance Support 1 Bug/Security fixes Limited hardware refresh Maintenance Support 2 Bug/Security fixes The end of this phase is considered \"Product retirement\" Extended Life Cycle Support (ELS) Additional paid product from Red Hat Gives updates for a longer period of time No additional releases/images Extended Life Phase (ELP) No more updates Limited Technical Support End date not given by Red Hat Current status of RHEL versions: RHEL 4 Not supported by Rust Currently in ELP (no end date specified) ELS ended March 31, 2017 RHEL 5 Supported by Rust Currently in ELS Maintenance Support ended March 31, 2017 ELS ends November 30, 2020 RHEL 6 Supported by Rust Currently in Maintenance Support 2 Maintenance Support ends November 30, 2020\ncc which I think they could be related.\nIt also may be worth to drop support of Windows XP and Vista (especially considering that panics are broken on XP since June 2016, see: ). Previously it was discussed . cc\nBesides , which other things would we be able to clean up?\nGood question! There are 6 workarounds in for older Linux versions (that I could find). Increasing the minimum version to 2.6.32 (aka 3rd Kernel LTS, aka RHEL 6) would fix 5 of them. Code links are inline: (mentioned above, in 2.6.23) ( in 2.6.24, there is also the mention of a bug occuring on \"some linux kernel at some point\") to atomically set the flag on the pipe fds ( in 2.6.27) ( in 2.6.27) to permit use of ( in 2.6.28) ( in 4.5, not fixed by this proposal) As you can see, the workarounds fixed by this proposal all have a similar flavor.\nI am Red Hat's maintainer for Rust on RHEL -- thanks for the cc. I try to keep an eye on new issues, but this one slipped past me. Red Hat only ships the Rust toolchain to customers for RHEL 7 and RHEL 8. If our customers would like to use Rust on older RHEL, they can do so via , and we'll support them in the same way we would for any other third-party software. 
Internally we do also build and use on RHEL 6, mostly because it's needed to ship Firefox updates. This is where it gets a little hairy, because each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel. I would have to apply local workarounds if upstream Rust loses the ability to run on 2.6.18 kernels. We prefer to keep fixes upstream as much as possible, both for open sharing and to avoid bitrot. Is there much of a benefit to removing the current workarounds? I agree all of that code is annoying and cumbersome, but it's already written, and as far as I know it hasn't been a maintenance burden. Do you see otherwise? If there are any known issues with such Linux compatibility code, I am definitely willing to take assignment for fixing them.\n(Not a Rust maintainer, but I'm a Rust user and maintain a lot of other OSS software so I have well-developed feelings around supporting Very Old Software :-)) Do you have any sense of on what timeline Rust would be able to drop support for 2.6.18 kernels without causing you pain? In general I don't think people mind supporting configurations that have users and are painful to work around, but needing to support them for forever is a bitter pill to swallow! Particularly as they get harder and harder to test over time (already I have no idea how to test on really old kernels besides building it myself). So if there was an estimate \"we'd like to be able to support this until Q2 2020\", even if it's not set in stone, I think that would be very helpful!\nThe other benefit would be that Rust wouldn't have to use CentOS 5 for CI, which means we don't have to patch LLVM (and Emscripten LLVM) to compile on those systems. Of course that's also fairly limited in scope.\nAs soon as we stop shipping Firefox and Thunderbird updates on RHEL 6, I won't need Rust there anymore. AFAIK this does correspond to the end of Maintenance Support, November 30, 2020. 
Then I'll be on to RHEL 7 builders as my minimum, probably still with some RHEL 6 2.6.32 kernels involved. It should be fine for me if we update CI to CentOS 6. This is mostly concerned with how the release binaries are linked to GLIBC symbol versions, which is all in userspace. It's a broader community question whether any other Rust users still care about running on RHEL or CentOS 5. (Small caveat - glibc support for a symbol can still return , .)\nI noticed that Red Hat also has Extended Life Cycle Support (ELS) for RHEL 6 until June 30, 2024. Will you need RHEL 5 to work during this time? I don't know how ELS works with Firefox updates. Also, is there any reason RHEL 4 support wasn't an issue prior to March 31, 2017 (while RHEL 5 was still normally supported)? This issue came up for me when dealing with opening files in , see for more info. No single RHEL 5 issue is that bad, it's mainly just the sum of a bunch of tiny issues.\nI'm not on that team, but AFAIK we don't ship Firefox updates for ELS. The last build I see for RHEL 5 was . Maybe there could be an exception for a severe security issue, but I really doubt we'd rebase to newer Firefox for that, which means Rust requirements won't change. New Firefox ESR versions do require a newer Rust toolchain too, which is generally why we have to keep up. Otherwise we could just freeze some older compat rustc while upstream moves on. Rust wasn't required until Firefox 52.\nthat makes perfect sense to me, thanks for clarifying. So the proposed policy would be: Support RHEL until RHEL is retired (i.e. ends normal support). This would mean: Supporting RHEL 5 (v2.6.18) until November 30, 2020 Supporting RHEL 6 (v2.6.32) until June 30, 2024 Supporting RHEL 7 (v3.10) until May, 2029 This is a much longer support tail than any other Linux distros (that I know of), so it would also be the effective minimum kernel version of , , and their dependencies. How does that sound to people? 
EDIT: The alternative to this would be Support RHEL until it is retired, taking the table above and incrementing the RHEL versions by 1. It would also mean being able to drop RHEL 5 support now.\nSure, that's ideal for me. :smile: As mentioned, is just needed for kernel support, as far as I'm concerned. Userspace concerns like GLIBC symbol versions can stick to currently supported for my needs.\nRed Hat should backport the features we need (CLOEXEC and getrandom, in particular) to whatever kernels that it wants Rust to support. I don't know any good reason why they don't do so, other than it's cheaper to convince the whole world to keep supporting older kernel versions than it is to do the backports. We should change that dynamic.\nThe alternative to this would be Support RHEL N until it is retired I think we should not officially support retired OS versions (well, maybe with some grace period), so I prefer this option. Supporting 14 year old kernels seems a bit too much to me. each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel Is it possible to apply kernel patches to those builders to add support for functionality like CLOEXEC? Or build Firefox separately on RHEL 6?\nI think it won't be fruitful for us to debate the merits of stable enterprise kernels, but no, this is not a possibility. An alternative take is \"Red Hat should be responsible for the work in Rust to maintain the support they want\" -- here I am, ready and willing. I definitely can't change those kernels. The ones that are stuck on N-1 are precisely to avoid rocking the boat, and backporting features is a big change. Some of our arches do eventually update the builders to their matching N kernel, but some don't, and I don't know all the reasons. If the workarounds are removed, then I will have to reapply them in our own builds. 
This assumes I keep using only our own binaries -- if I ever have to re-bootstrap from upstream binaries for stage0, I would also have to use some interception (like ) to hack in the workarounds. This is all possible, but much worse than the status quo of maintaining support here.\nI think this sounds reasonable. We aren't going to be able to update these kernels, it's just a question of when we would drop support for these OSes. Especially given that we wont need RHEL 5 CI support, I think that leaving things as they are is fine. I opened this issue to verify two things: 1) That we weren't intending on supporting RHEL 5 \"forever\", and had a clear date when to drop support. 2) That someone from Red Hat actively cared about supporting these older kernels. It looks like both these things are true. We should remove RHEL 5 workarounds after November 30, 2020. This issue can be postponed until then. EDIT: For our use case in , we were able to add compatibility for RHEL 5 .\nSee for a recent case where supporting such ancient kernels required extra manual work ( to the 2 lines mentioned by above). I am not sure how often that comes up, but probably the hardest part is to even notice that it happens -- and even then those codepaths will likely linger untested.\nYes, but this will still be true in general. If the CI kernel is newer than whatever kernel baseline we choose, it will be possible for newer syscalls to pass undetected. I don't know if we have any control over that -- can we choose a particular VM image? So that leaves us with a stated policy of support. The more that developers are aware of such issues, the better, but I'm the one most likely to notice when my build actually fails. If I come back with a fix, I at least need the project to be receptive. I haven't been in the habit of building nightly on our RHEL6 builders, but maybe I should! Catching this early is better for everyone involved...\nThat seems reasonable. 
How quick is the Rust related bootstrapping? Could it be run automatically (or more frequently)? This is an interesting point. If I hadn't made that change, would have kept working on RHEL 5, it just would have leaked an FD. I'm not sure how we would even test for this sort of thing. This isn't a RHEL 5 specific concern, I just don't know how Rust tests these types of resource leak issues in general.\nAbout 2 hours. Automation is harder, since I believe our build system needs my credentials, but I'll look into what is possible here. Yeah, that's a lot more subtle than an !", "positive_passages": [{"docid": "doc-en-rust-86704df67fbee7a75598b4ec01e5e5e9edae516161ee71f94318a14fe2ec230c", "text": "COPY host-x86_64/dist-x86_64-linux/build-binutils.sh /tmp/ RUN ./build-binutils.sh # libssh2 (a dependency of Cargo) requires cmake 2.8.11 or higher but CentOS # only has 2.6.4, so build our own COPY host-x86_64/dist-x86_64-linux/build-cmake.sh /tmp/ RUN ./build-cmake.sh # Need a newer version of gcc than centos has to compile LLVM nowadays # Need at least GCC 5.1 to compile LLVM nowadays COPY host-x86_64/dist-x86_64-linux/build-gcc.sh /tmp/ RUN ./build-gcc.sh RUN ./build-gcc.sh && apt-get remove -y gcc g++ # CentOS 5.5 has Python 2.4 by default, but LLVM needs 2.7+ # Debian 6 has Python 2.6 by default, but LLVM needs 2.7+ COPY host-x86_64/dist-x86_64-linux/build-python.sh /tmp/ RUN ./build-python.sh # Now build LLVM+Clang 7, afterwards configuring further compilations to use the # LLVM needs cmake 3.4.3 or higher, and is planning to raise to 3.13.4. COPY host-x86_64/dist-x86_64-linux/build-cmake.sh /tmp/ RUN ./build-cmake.sh # Now build LLVM+Clang, afterwards configuring further compilations to use the # clang/clang++ compilers. 
COPY host-x86_64/dist-x86_64-linux/build-clang.sh host-x86_64/dist-x86_64-linux/llvm-project-centos.patch /tmp/ COPY host-x86_64/dist-x86_64-linux/build-clang.sh /tmp/ RUN ./build-clang.sh ENV CC=clang CXX=clang++ # Apparently CentOS 5.5 desn't have `git` in yum, but we're gonna need it for # cloning, so download and build it here. COPY host-x86_64/dist-x86_64-linux/build-git.sh /tmp/ RUN ./build-git.sh # for sanitizers, we need kernel headers files newer than the ones CentOS ships # with so we install newer ones here COPY host-x86_64/dist-x86_64-linux/build-headers.sh /tmp/ RUN ./build-headers.sh # OpenSSL requires a more recent version of perl # with so we install newer ones here COPY host-x86_64/dist-x86_64-linux/build-perl.sh /tmp/ RUN ./build-perl.sh COPY scripts/sccache.sh /scripts/ RUN sh /scripts/sccache.sh", "commid": "rust_pr_74163"}], "negative_passages": []}
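One of the workarounds named in the thread above -- atomically setting the close-on-exec flag on pipe fds, which is only possible since kernel 2.6.27 -- follows the probe-and-fall-back shape shared by the whole list. A minimal sketch in C (illustrative only; `pipe_cloexec` is an invented name, not std's actual code):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Illustrative sketch of the pre-2.6.27 workaround mentioned in the
 * thread: pipe2() sets O_CLOEXEC atomically, but a kernel that predates
 * it fails with ENOSYS, forcing a fallback to pipe() + fcntl(). */
static int pipe_cloexec(int fds[2]) {
    if (pipe2(fds, O_CLOEXEC) == 0)
        return 0;                  /* fast path: kernel >= 2.6.27 */
    if (errno != ENOSYS)
        return -1;                 /* real error, not a missing syscall */
    if (pipe(fds) != 0)            /* legacy path: e.g. RHEL 5's 2.6.18 */
        return -1;
    fcntl(fds[0], F_SETFD, FD_CLOEXEC);
    fcntl(fds[1], F_SETFD, FD_CLOEXEC);
    return 0;
}
```

On an old kernel the window between `pipe()` and the two `fcntl()` calls is exactly where a concurrent `fork`/`exec` can leak the descriptors, which is why the atomic `pipe2()` path is taken whenever the kernel offers it.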
{"query_id": "q-en-rust-bf57e7b18ed8b9ab5e27682bd209117a0b094e52120110102ca62034de171272", "query": "The docs currently claim that the minimum Linux kernel version for is 2.6.18 (released September 20th, 2006). This is because RHEL 5 used that kernel version. However, RHEL 5 entered ELS on March 31, 2017. Should we continue to support RHEL 5 for , or should we increase the minimum Linux Kernel version to 2.6.27 (2nd LTS) or 2.6.32 (RHEL 6, 3rd LTS)? . Even bumping the min-version to 2.6.27 would allow us to remove most of the Linux-specific hacks in . Example: .\nGiven that RHEL is the only reason we keep comically old kernel versions around, I would propose that Rust only support RHEL until Maintenance Support ends. This is (essentially) what we already did for RHEL 4. Rust never supported RHEL 4, and back when RHEL 5 still had maintenance support. It would be nice to get someone from Red Hat or a RHEL customer to comment. This policy would allow us to increment the minimum kernel from 2.6.18 to 2.6.32 (and remove a lot of Linux-specific hacks). Note that RHEL has a weird (and very long) support system for RHEL 4, 5, and 6 (sources: and ). 
It has 5 major steps: Full Support Normal support of the OS Maintenance Support 1 Bug/Security fixes Limited hardware refresh Maintenance Support 2 Bug/Security fixes The end of this phase is considered \"Product retirement\" Extended Life Cycle Support (ELS) Additional paid product from Red Hat Gives updates for a longer period of time No additional releases/images Extended Life Phase (ELP) No more updates Limited Technical Support End date not given by Red Hat Current status of RHEL versions: RHEL 4 Not supported by Rust Currently in ELP (no end date specified) ELS ended March 31, 2017 RHEL 5 Supported by Rust Currently in ELS Maintenance Support ended March 31, 2017 ELS ends November 30, 2020 RHEL 6 Supported by Rust Currently in Maintenance Support 2 Maintenance Support ends November 30, 2020\ncc which I think they could be related.\nIt also may be worth to drop support of Windows XP and Vista (especially considering that panics are broken on XP since June 2016, see: ). Previously it was discussed . cc\nBesides , which other things would we be able to clean up?\nGood question! There are 6 workarounds in for older Linux versions (that I could find). Increasing the minimum version to 2.6.32 (aka 3rd Kernel LTS, aka RHEL 6) would fix 5 of them. Code links are inline: (mentioned above, in 2.6.23) ( in 2.6.24, there is also the mention of a bug occurring on \"some linux kernel at some point\") to atomically set the flag on the pipe fds ( in 2.6.27) ( in 2.6.27) to permit use of ( in 2.6.28) ( in 4.5, not fixed by this proposal) As you can see, the workarounds fixed by this proposal all have a similar flavor.\nI am Red Hat's maintainer for Rust on RHEL -- thanks for the cc. I try to keep an eye on new issues, but this one slipped past me. Red Hat only ships the Rust toolchain to customers for RHEL 7 and RHEL 8. If our customers would like to use Rust on older RHEL, they can do so via , and we'll support them in the same way we would for any other third-party software. 
Internally we do also build and use on RHEL 6, mostly because it's needed to ship Firefox updates. This is where it gets a little hairy, because each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel. I would have to apply local workarounds if upstream Rust loses the ability to run on 2.6.18 kernels. We prefer to keep fixes upstream as much as possible, both for open sharing and to avoid bitrot. Is there much of a benefit to removing the current workarounds? I agree all of that code is annoying and cumbersome, but it's already written, and as far as I know it hasn't been a maintenance burden. Do you see otherwise? If there are any known issues with such Linux compatibility code, I am definitely willing to take assignment for fixing them.\n(Not a Rust maintainer, but I'm a Rust user and maintain a lot of other OSS software so I have well-developed feelings around supporting Very Old Software :-)) Do you have any sense of on what timeline Rust would be able to drop support for 2.6.18 kernels without causing you pain? In general I don't think people mind supporting configurations that have users and are painful to work around, but needing to support them for forever is a bitter pill to swallow! Particularly as they get harder and harder to test over time (already I have no idea how to test on really old kernels besides building it myself). So if there was an estimate \"we'd like to be able to support this until Q2 2020\", even if it's not set in stone, I think that would be very helpful!\nThe other benefit would be that Rust wouldn't have to use CentOS 5 for CI, which means we don't have to patch LLVM (and Emscripten LLVM) to compile on those systems. Of course that's also fairly limited in scope.\nAs soon as we stop shipping Firefox and Thunderbird updates on RHEL 6, I won't need Rust there anymore. AFAIK this does correspond to the end of Maintenance Support, November 30, 2020. 
Then I'll be on to RHEL 7 builders as my minimum, probably still with some RHEL 6 2.6.32 kernels involved. It should be fine for me if we update CI to CentOS 6. This is mostly concerned with how the release binaries are linked to GLIBC symbol versions, which is all in userspace. It's a broader community question whether any other Rust users still care about running on RHEL or CentOS 5. (Small caveat - glibc support for a symbol can still return , .)\nI noticed that Red Hat also has Extended Life Cycle Support (ELS) for RHEL 6 until June 30, 2024. Will you need RHEL 5 to work during this time? I don't know how ELS works with Firefox updates. Also, is there any reason RHEL 4 support wasn't an issue prior to March 31, 2017 (while RHEL 5 was still normally supported)? This issue came up for me when dealing with opening files in , see for more info. No single RHEL 5 issue is that bad, it's mainly just the sum of a bunch of tiny issues.\nI'm not on that team, but AFAIK we don't ship Firefox updates for ELS. The last build I see for RHEL 5 was . Maybe there could be an exception for a severe security issue, but I really doubt we'd rebase to newer Firefox for that, which means Rust requirements won't change. New Firefox ESR versions do require a newer Rust toolchain too, which is generally why we have to keep up. Otherwise we could just freeze some older compat rustc while upstream moves on. Rust wasn't required until Firefox 52.\nthat makes perfect sense to me, thanks for clarifying. So the proposed policy would be: Support RHEL until RHEL is retired (i.e. ends normal support). This would mean: Supporting RHEL 5 (v2.6.18) until November 30, 2020 Supporting RHEL 6 (v2.6.32) until June 30, 2024 Supporting RHEL 7 (v3.10) until May, 2029 This is a much longer support tail than any other Linux distros (that I know of), so it would also be the effective minimum kernel version of , , and their dependencies. How does that sound to people? 
EDIT: The alternative to this would be Support RHEL until it is retired, taking the table above and incrementing the RHEL versions by 1. It would also mean being able to drop RHEL 5 support now.\nSure, that's ideal for me. :smile: As mentioned, is just needed for kernel support, as far as I'm concerned. Userspace concerns like GLIBC symbol versions can stick to currently supported for my needs.\nRed Hat should backport the features we need (CLOEXEC and getrandom, in particular) to whatever kernels that it wants Rust to support. I don't know any good reason why they don't do so, other than it's cheaper to convince the whole world to keep supporting older kernel versions than it is to do the backports. We should change that dynamic.\nThe alternative to this would be Support RHEL N until it is retired I think we should not officially support retired OS versions (well, maybe with some grace period), so I prefer this option. Supporting 14 year old kernels seems a bit too much to me. each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel Is it possible to apply kernel patches to those builders to add support for functionality like CLOEXEC? Or build Firefox separately on RHEL 6?\nI think it won't be fruitful for us to debate the merits of stable enterprise kernels, but no, this is not a possibility. An alternative take is \"Red Hat should be responsible for the work in Rust to maintain the support they want\" -- here I am, ready and willing. I definitely can't change those kernels. The ones that are stuck on N-1 are precisely to avoid rocking the boat, and backporting features is a big change. Some of our arches do eventually update the builders to their matching N kernel, but some don't, and I don't know all the reasons. If the workarounds are removed, then I will have to reapply them in our own builds. 
This assumes I keep using only our own binaries -- if I ever have to re-bootstrap from upstream binaries for stage0, I would also have to use some interception (like ) to hack in the workarounds. This is all possible, but much worse than the status quo of maintaining support here.\nI think this sounds reasonable. We aren't going to be able to update these kernels, it's just a question of when we would drop support for these OSes. Especially given that we won't need RHEL 5 CI support, I think that leaving things as they are is fine. I opened this issue to verify two things: 1) That we weren't intending on supporting RHEL 5 \"forever\", and had a clear date when to drop support. 2) That someone from Red Hat actively cared about supporting these older kernels. It looks like both these things are true. We should remove RHEL 5 workarounds after November 30, 2020. This issue can be postponed until then. EDIT: For our use case in , we were able to add compatibility for RHEL 5 .\nSee for a recent case where supporting such ancient kernels required extra manual work ( to the 2 lines mentioned by above). I am not sure how often that comes up, but probably the hardest part is to even notice that it happens -- and even then those codepaths will likely linger untested.\nYes, but this will still be true in general. If the CI kernel is newer than whatever kernel baseline we choose, it will be possible for newer syscalls to pass undetected. I don't know if we have any control over that -- can we choose a particular VM image? So that leaves us with a stated policy of support. The more that developers are aware of such issues, the better, but I'm the one most likely to notice when my build actually fails. If I come back with a fix, I at least need the project to be receptive. I haven't been in the habit of building nightly on our RHEL6 builders, but maybe I should! Catching this early is better for everyone involved...\nThat seems reasonable. 
How quick is the Rust related bootstrapping? Could it be run automatically (or more frequently)? This is an interesting point. If I hadn't made that change, would have kept working on RHEL 5, it just would have leaked an FD. I'm not sure how we would even test for this sort of thing. This isn't a RHEL 5 specific concern, I just don't know how Rust tests these types of resource leak issues in general.\nAbout 2 hours. Automation is harder, since I believe our build system needs my credentials, but I'll look into what is possible here. Yeah, that's a lot more subtle than an !", "positive_passages": [{"docid": "doc-en-rust-eb323ced085deef56200e4059a069ecb7dc0dfc2336e150f8ea1bbe7ac805c46", "text": "# libcurl, instead it should compile its own. ENV LIBCURL_NO_PKG_CONFIG 1 # There was a bad interaction between \"old\" 32-bit binaries on current 64-bit # kernels with selinux enabled, where ASLR mmap would sometimes choose a low # address and then block it for being below `vm.mmap_min_addr` -> `EACCES`. # This is probably a kernel bug, but setting `ulimit -Hs` works around it. # See also `src/ci/run.sh` where this takes effect. ENV SET_HARD_RLIMIT_STACK 1 ENV DIST_REQUIRE_ALL_TOOLS 1", "commid": "rust_pr_74163"}], "negative_passages": []}
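The thread singles out `getrandom` (added in Linux 3.17) as one of the features that old enterprise kernels lack. A hedged sketch of the usual fallback, assuming nothing about std's real implementation (`fill_random` is an invented name, and unlike real code this sketch falls back on any failure rather than inspecting `errno` for ENOSYS specifically):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Illustrative sketch: getrandom(2) exists only on Linux >= 3.17, so
 * code that must keep running on RHEL 6's 2.6.32 probes the syscall
 * and falls back to reading /dev/urandom when it is unavailable. */
static ssize_t fill_random(unsigned char *buf, size_t len) {
#ifdef SYS_getrandom
    ssize_t n = syscall(SYS_getrandom, buf, len, 0);
    if (n == (ssize_t)len)
        return n;                  /* kernel >= 3.17 */
    /* ENOSYS on old kernels; this sketch also falls back on any other
     * failure or short read -- real code distinguishes errno values. */
#endif
    int fd = open("/dev/urandom", O_RDONLY | O_CLOEXEC);
    if (fd < 0)
        return -1;
    ssize_t got = read(fd, buf, len);  /* legacy path */
    close(fd);
    return got;
}
```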
{"query_id": "q-en-rust-bf57e7b18ed8b9ab5e27682bd209117a0b094e52120110102ca62034de171272", "query": "The docs currently claim that the minimum Linux kernel version for is 2.6.18 (released September 20th, 2006). This is because RHEL 5 used that kernel version. However, RHEL 5 entered ELS on March 31, 2017. Should we continue to support RHEL 5 for , or should we increase the minimum Linux Kernel version to 2.6.27 (2nd LTS) or 2.6.32 (RHEL 6, 3rd LTS)? . Even bumping the min-version to 2.6.27 would allow us to remove most of the Linux-specific hacks in . Example: .\nGiven that RHEL is the only reason we keep comically old kernel versions around, I would propose that Rust only support RHEL until Maintenance Support ends. This is (essentially) what we already did for RHEL 4. Rust never supported RHEL 4, and back when RHEL 5 still had maintenance support. It would be nice to get someone from Red Hat or a RHEL customer to comment. This policy would allow us to increment the minimum kernel from 2.6.18 to 2.6.32 (and remove a lot of Linux-specific hacks). Note that RHEL has a weird (and very long) support system for RHEL 4, 5, and 6 (sources: and ). 
It has 5 major steps: Full Support Normal support of the OS Maintenance Support 1 Bug/Security fixes Limited hardware refresh Maintenance Support 2 Bug/Security fixes The end of this phase is considered \"Product retirement\" Extended Life Cycle Support (ELS) Additional paid product from Red Hat Gives updates for a longer period of time No additional releases/images Extended Life Phase (ELP) No more updates Limited Technical Support End date not given by Red Hat Current status of RHEL versions: RHEL 4 Not supported by Rust Currently in ELP (no end date specified) ELS ended March 31, 2017 RHEL 5 Supported by Rust Currently in ELS Maintenance Support ended March 31, 2017 ELS ends November 30, 2020 RHEL 6 Supported by Rust Currently in Maintenance Support 2 Maintenance Support ends November 30, 2020\ncc which I think they could be related.\nIt also may be worth to drop support of Windows XP and Vista (especially considering that panics are broken on XP since June 2016, see: ). Previously it was discussed . cc\nBesides , which other things would we be able to clean up?\nGood question! There are 6 workarounds in for older Linux versions (that I could find). Increasing the minimum version to 2.6.32 (aka 3rd Kernel LTS, aka RHEL 6) would fix 5 of them. Code links are inline: (mentioned above, in 2.6.23) ( in 2.6.24, there is also the mention of a bug occurring on \"some linux kernel at some point\") to atomically set the flag on the pipe fds ( in 2.6.27) ( in 2.6.27) to permit use of ( in 2.6.28) ( in 4.5, not fixed by this proposal) As you can see, the workarounds fixed by this proposal all have a similar flavor.\nI am Red Hat's maintainer for Rust on RHEL -- thanks for the cc. I try to keep an eye on new issues, but this one slipped past me. Red Hat only ships the Rust toolchain to customers for RHEL 7 and RHEL 8. If our customers would like to use Rust on older RHEL, they can do so via , and we'll support them in the same way we would for any other third-party software. 
Internally we do also build and use on RHEL 6, mostly because it's needed to ship Firefox updates. This is where it gets a little hairy, because each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel. I would have to apply local workarounds if upstream Rust loses the ability to run on 2.6.18 kernels. We prefer to keep fixes upstream as much as possible, both for open sharing and to avoid bitrot. Is there much of a benefit to removing the current workarounds? I agree all of that code is annoying and cumbersome, but it's already written, and as far as I know it hasn't been a maintenance burden. Do you see otherwise? If there are any known issues with such Linux compatibility code, I am definitely willing to take assignment for fixing them.\n(Not a Rust maintainer, but I'm a Rust user and maintain a lot of other OSS software so I have well-developed feelings around supporting Very Old Software :-)) Do you have any sense of on what timeline Rust would be able to drop support for 2.6.18 kernels without causing you pain? In general I don't think people mind supporting configurations that have users and are painful to work around, but needing to support them for forever is a bitter pill to swallow! Particularly as they get harder and harder to test over time (already I have no idea how to test on really old kernels besides building it myself). So if there was an estimate \"we'd like to be able to support this until Q2 2020\", even if it's not set in stone, I think that would be very helpful!\nThe other benefit would be that Rust wouldn't have to use CentOS 5 for CI, which means we don't have to patch LLVM (and Emscripten LLVM) to compile on those systems. Of course that's also fairly limited in scope.\nAs soon as we stop shipping Firefox and Thunderbird updates on RHEL 6, I won't need Rust there anymore. AFAIK this does correspond to the end of Maintenance Support, November 30, 2020. 
Then I'll be on to RHEL 7 builders as my minimum, probably still with some RHEL 6 2.6.32 kernels involved. It should be fine for me if we update CI to CentOS 6. This is mostly concerned with how the release binaries are linked to GLIBC symbol versions, which is all in userspace. It's a broader community question whether any other Rust users still care about running on RHEL or CentOS 5. (Small caveat - glibc support for a symbol can still return , .)\nI noticed that Red Hat also has Extended Life Cycle Support (ELS) for RHEL 6 until June 30, 2024. Will you need RHEL 5 to work during this time? I don't know how ELS works with Firefox updates. Also, is there any reason RHEL 4 support wasn't an issue prior to March 31, 2017 (while RHEL 5 was still normally supported)? This issue came up for me when dealing with opening files in , see for more info. No single RHEL 5 issue is that bad, it's mainly just the sum of a bunch of tiny issues.\nI'm not on that team, but AFAIK we don't ship Firefox updates for ELS. The last build I see for RHEL 5 was . Maybe there could be an exception for a severe security issue, but I really doubt we'd rebase to newer Firefox for that, which means Rust requirements won't change. New Firefox ESR versions do require a newer Rust toolchain too, which is generally why we have to keep up. Otherwise we could just freeze some older compat rustc while upstream moves on. Rust wasn't required until Firefox 52.\nthat makes perfect sense to me, thanks for clarifying. So the proposed policy would be: Support RHEL until RHEL is retired (i.e. ends normal support). This would mean: Supporting RHEL 5 (v2.6.18) until November 30, 2020 Supporting RHEL 6 (v2.6.32) until June 30, 2024 Supporting RHEL 7 (v3.10) until May, 2029 This is a much longer support tail than any other Linux distros (that I know of), so it would also be the effective minimum kernel version of , , and their dependencies. How does that sound to people? 
EDIT: The alternative to this would be Support RHEL until it is retired, taking the table above and incrementing the RHEL versions by 1. It would also mean being able to drop RHEL 5 support now.\nSure, that's ideal for me. :smile: As mentioned, is just needed for kernel support, as far as I'm concerned. Userspace concerns like GLIBC symbol versions can stick to currently supported for my needs.\nRed Hat should backport the features we need (CLOEXEC and getrandom, in particular) to whatever kernels that it wants Rust to support. I don't know any good reason why they don't do so, other than it's cheaper to convince the whole world to keep supporting older kernel versions than it is to do the backports. We should change that dynamic.\nThe alternative to this would be Support RHEL N until it is retired I think we should not officially support retired OS versions (well, maybe with some grace period), so I prefer this option. Supporting 14 year old kernels seems a bit too much to me. each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel Is it possible to apply kernel patches to those builders to add support for functionality like CLOEXEC? Or build Firefox separately on RHEL 6?\nI think it won't be fruitful for us to debate the merits of stable enterprise kernels, but no, this is not a possibility. An alternative take is \"Red Hat should be responsible for the work in Rust to maintain the support they want\" -- here I am, ready and willing. I definitely can't change those kernels. The ones that are stuck on N-1 are precisely to avoid rocking the boat, and backporting features is a big change. Some of our arches do eventually update the builders to their matching N kernel, but some don't, and I don't know all the reasons. If the workarounds are removed, then I will have to reapply them in our own builds. 
This assumes I keep using only our own binaries -- if I ever have to re-bootstrap from upstream binaries for stage0, I would also have to use some interception (like ) to hack in the workarounds. This is all possible, but much worse than the status quo of maintaining support here.\nI think this sounds reasonable. We aren't going to be able to update these kernels, it's just a question of when we would drop support for these OSes. Especially given that we won't need RHEL 5 CI support, I think that leaving things as they are is fine. I opened this issue to verify two things: 1) That we weren't intending on supporting RHEL 5 \"forever\", and had a clear date when to drop support. 2) That someone from Red Hat actively cared about supporting these older kernels. It looks like both these things are true. We should remove RHEL 5 workarounds after November 30, 2020. This issue can be postponed until then. EDIT: For our use case in , we were able to add compatibility for RHEL 5 .\nSee for a recent case where supporting such ancient kernels required extra manual work ( to the 2 lines mentioned by above). I am not sure how often that comes up, but probably the hardest part is to even notice that it happens -- and even then those codepaths will likely linger untested.\nYes, but this will still be true in general. If the CI kernel is newer than whatever kernel baseline we choose, it will be possible for newer syscalls to pass undetected. I don't know if we have any control over that -- can we choose a particular VM image? So that leaves us with a stated policy of support. The more that developers are aware of such issues, the better, but I'm the one most likely to notice when my build actually fails. If I come back with a fix, I at least need the project to be receptive. I haven't been in the habit of building nightly on our RHEL6 builders, but maybe I should! Catching this early is better for everyone involved...\nThat seems reasonable. 
How quick is the Rust related bootstrapping? Could it be run automatically (or more frequently)? This is an interesting point. If I hadn't made that change, would have kept working on RHEL 5, it just would have leaked an FD. I'm not sure how we would even test for this sort of thing. This isn't a RHEL 5 specific concern, I just don't know how Rust tests these types of resource leak issues in general.\nAbout 2 hours. Automation is harder, since I believe our build system needs my credentials, but I'll look into what is possible here. Yeah, that's a lot more subtle than an !", "positive_passages": [{"docid": "doc-en-rust-b335b72fe572704d5ea3ce746b5234372c25c1a7d77b184ce379f85067b347ee", "text": " FROM centos:5 # We use Debian 6 (glibc 2.11, kernel 2.6.32) as a common base for other # distros that still need Rust support: RHEL 6 (glibc 2.12, kernel 2.6.32) and # SLES 11 SP4 (glibc 2.11, kernel 3.0). FROM debian:6 WORKDIR /build # Centos 5 is EOL and is no longer available from the usual mirrors, so switch # to http://vault.centos.org/ RUN sed -i 's/enabled=1/enabled=0/' /etc/yum/pluginconf.d/fastestmirror.conf RUN sed -i 's/mirrorlist/#mirrorlist/' /etc/yum.repos.d/*.repo RUN sed -i 's|#(baseurl.*)mirror.centos.org/centos/$releasever|1vault.centos.org/5.11|' /etc/yum.repos.d/*.repo # Debian 6 is EOL and no longer available from the usual mirrors, # so we'll need to switch to http://archive.debian.org/ RUN sed -i '/updates/d' /etc/apt/sources.list && sed -i 's/httpredir/archive/' /etc/apt/sources.list RUN yum upgrade -y && yum install -y curl RUN apt-get update && apt-get install --allow-unauthenticated -y --no-install-recommends automake bzip2 ca-certificates curl file g++ g++-multilib gcc gcc-c++ gcc-multilib git lib32z1-dev libedit-dev libncurses-dev make glibc-devel patch perl zlib-devel file xz which pkgconfig pkg-config unzip wget autoconf gettext xz-utils zlib1g-dev ENV PATH=/rustroot/bin:$PATH ENV LD_LIBRARY_PATH=/rustroot/lib64:/rustroot/lib ENV 
LD_LIBRARY_PATH=/rustroot/lib64:/rustroot/lib32:/rustroot/lib ENV PKG_CONFIG_PATH=/rustroot/lib/pkgconfig WORKDIR /tmp RUN mkdir /home/user COPY host-x86_64/dist-x86_64-linux/shared.sh /tmp/ # We need a build of openssl which supports SNI to download artifacts from", "commid": "rust_pr_74163"}], "negative_passages": []}
{"query_id": "q-en-rust-bf57e7b18ed8b9ab5e27682bd209117a0b094e52120110102ca62034de171272", "query": "The docs currently claim that the minimum Linux kernel version for is 2.6.18 (released September 20th, 2006). This is because RHEL 5 used that kernel version. However, RHEL 5 entered ELS on March 31, 2017. Should we continue to support RHEL 5 for , or should we increase the minimum Linux Kernel version to 2.6.27 (2nd LTS) or 2.6.32 (RHEL 6, 3rd LTS)? . Even bumping the min-version to 2.6.27 would allow us to remove most of the Linux-specific hacks in . Example: .\nGiven that RHEL is the only reason we keep comically old kernel versions around, I would propose that Rust only support RHEL until Maintenance Support ends. This is (essentially) what we already did for RHEL 4. Rust never supported RHEL 4, and back when RHEL 5 still had maintenance support. It would be nice to get someone from Red Hat or a RHEL customer to comment. This policy would allow us to increment the minimum kernel from 2.6.18 to 2.6.32 (and remove a lot of Linux-specific hacks). Note that RHEL has a weird (and very long) support system for RHEL 4, 5, and 6 (sources: and ). 
It has 5 major steps:

- Full Support: normal support of the OS
- Maintenance Support 1: bug/security fixes; limited hardware refresh
- Maintenance Support 2: bug/security fixes; the end of this phase is considered \"Product retirement\"
- Extended Life Cycle Support (ELS): an additional paid product from Red Hat; gives updates for a longer period of time; no additional releases/images
- Extended Life Phase (ELP): no more updates; limited technical support; end date not given by Red Hat

Current status of RHEL versions:

- RHEL 4: not supported by Rust; currently in ELP (no end date specified); ELS ended March 31, 2017
- RHEL 5: supported by Rust; currently in ELS; Maintenance Support ended March 31, 2017; ELS ends November 30, 2020
- RHEL 6: supported by Rust; currently in Maintenance Support 2; Maintenance Support ends November 30, 2020

cc which I think they could be related.

It may also be worth dropping support for Windows XP and Vista (especially considering that panics have been broken on XP since June 2016, see: ). Previously it was discussed . cc

Besides , which other things would we be able to clean up?

Good question! There are 6 workarounds in for older Linux versions (that I could find). Increasing the minimum version to 2.6.32 (aka 3rd Kernel LTS, aka RHEL 6) would fix 5 of them. Code links are inline:

- (mentioned above, in 2.6.23)
- ( in 2.6.24; there is also the mention of a bug occurring on \"some linux kernel at some point\")
- to atomically set the flag on the pipe fds ( in 2.6.27)
- ( in 2.6.27)
- to permit use of ( in 2.6.28)
- ( in 4.5, not fixed by this proposal)

As you can see, the workarounds fixed by this proposal all have a similar flavor.

I am Red Hat's maintainer for Rust on RHEL -- thanks for the cc. I try to keep an eye on new issues, but this one slipped past me. Red Hat only ships the Rust toolchain to customers for RHEL 7 and RHEL 8. If our customers would like to use Rust on older RHEL, they can do so via , and we'll support them in the same way we would for any other third-party software.
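The workarounds this proposal would remove all share one shape: try the modern CLOEXEC-aware interface first, then fall back to a separate (and racy) fcntl() call on kernels that predate it. Below is a minimal C sketch of that pattern for open(), assuming a Linux target; `open_cloexec` is an illustrative name, not std's actual internal API.

```c
// Sketch of the fallback pattern behind the workarounds listed above.
// Before kernel 2.6.23, open() silently ignored the unknown O_CLOEXEC
// bit rather than failing, so the flag must be verified and re-applied
// by hand. The fallback is non-atomic: a concurrent fork/exec can still
// inherit the descriptor, which is the class of leak the thread worries
// about.
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

// Open path read-only with close-on-exec set; returns fd or -1.
int open_cloexec(const char *path) {
    int fd = open(path, O_RDONLY | O_CLOEXEC);
    if (fd < 0)
        return -1;
    // Check whether CLOEXEC actually stuck (old kernels drop the bit).
    int fl = fcntl(fd, F_GETFD);
    if (fl >= 0 && !(fl & FD_CLOEXEC))
        fcntl(fd, F_SETFD, fl | FD_CLOEXEC);  // racy old-kernel fallback
    return fd;
}
```

On a 2.6.27+ kernel the verification is a no-op; the extra fcntl() pair only runs on the ancient kernels this issue is about.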
Internally we do also build and use on RHEL 6, mostly because it's needed to ship Firefox updates. This is where it gets a little hairy, because each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel. I would have to apply local workarounds if upstream Rust loses the ability to run on 2.6.18 kernels. We prefer to keep fixes upstream as much as possible, both for open sharing and to avoid bitrot. Is there much of a benefit to removing the current workarounds? I agree all of that code is annoying and cumbersome, but it's already written, and as far as I know it hasn't been a maintenance burden. Do you see otherwise? If there are any known issues with such Linux compatibility code, I am definitely willing to take assignment for fixing them.\n(Not a Rust maintainer, but I'm a Rust user and maintain a lot of other OSS software so I have well-developed feelings around supporting Very Old Software :-)) Do you have any sense of on what timeline Rust would be able to drop support for 2.6.18 kernels without causing you pain? In general I don't think people mind supporting configurations that have users and are painful to work around, but needing to support them for forever is a bitter pill to swallow! Particularly as they get harder and harder to test over time (already I have no idea how to test on really old kernels besides building it myself). So if there was an estimate \"we'd like to be able to support this until Q2 2020\", even if it's not set in stone, I think that would be very helpful!\nThe other benefit would be that Rust wouldn't have to use CentOS 5 for CI, which means we don't have to patch LLVM (and Emscripten LLVM) to compile on those systems. Of course that's also fairly limited in scope.\nAs soon as we stop shipping Firefox and Thunderbird updates on RHEL 6, I won't need Rust there anymore. AFAIK this does correspond to the end of Maintenance Support, November 30, 2020. 
Then I'll be on to RHEL 7 builders as my minimum, probably still with some RHEL 6 2.6.32 kernels involved. It should be fine for me if we update CI to CentOS 6. This is mostly concerned with how the release binaries are linked to GLIBC symbol versions, which is all in userspace. It's a broader community question whether any other Rust users still care about running on RHEL or CentOS 5. (Small caveat - glibc support for a symbol can still return , .)

I noticed that Red Hat also has Extended Life Cycle Support (ELS) for RHEL 6 until June 30, 2024. Will you need RHEL 5 to work during this time? I don't know how ELS works with Firefox updates. Also, is there any reason RHEL 4 support wasn't an issue prior to March 31, 2017 (while RHEL 5 was still normally supported)? This issue came up for me when dealing with opening files in , see for more info. No single RHEL 5 issue is that bad, it's mainly just the sum of a bunch of tiny issues.

I'm not on that team, but AFAIK we don't ship Firefox updates for ELS. The last build I see for RHEL 5 was . Maybe there could be an exception for a severe security issue, but I really doubt we'd rebase to newer Firefox for that, which means Rust requirements won't change. New Firefox ESR versions do require a newer Rust toolchain too, which is generally why we have to keep up. Otherwise we could just freeze some older compat rustc while upstream moves on. Rust wasn't required until Firefox 52.

That makes perfect sense to me, thanks for clarifying. So the proposed policy would be: support RHEL until RHEL is retired (i.e. ends normal support). This would mean:

- Supporting RHEL 5 (v2.6.18) until November 30, 2020
- Supporting RHEL 6 (v2.6.32) until June 30, 2024
- Supporting RHEL 7 (v3.10) until May 2029

This is a much longer support tail than any other Linux distro (that I know of), so it would also be the effective minimum kernel version of , , and their dependencies. How does that sound to people?
EDIT: The alternative to this would be Support RHEL until it is retired, taking the table above and incrementing the RHEL versions by 1. It would also mean being able to drop RHEL 5 support now.

Sure, that's ideal for me. :smile: As mentioned, is just needed for kernel support, as far as I'm concerned. Userspace concerns like GLIBC symbol versions can stick to currently supported for my needs.

Red Hat should backport the features we need (CLOEXEC and getrandom, in particular) to whatever kernels it wants Rust to support. I don't know any good reason why they don't do so, other than that it's cheaper to convince the whole world to keep supporting older kernel versions than it is to do the backports. We should change that dynamic.

> The alternative to this would be Support RHEL N until it is retired

I think we should not officially support retired OS versions (well, maybe with some grace period), so I prefer this option. Supporting 14-year-old kernels seems a bit too much to me.

> each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel

Is it possible to apply kernel patches to those builders to add support for functionality like CLOEXEC? Or build Firefox separately on RHEL 6?
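Of the two features named just above, getrandom is the one that cannot be patched around with an fcntl() call: the syscall only exists on kernel 3.17+, so on every RHEL kernel in this thread it fails with ENOSYS and userspace must fall back to /dev/urandom. A hedged C sketch of that shape; `fill_random` is an illustrative name, and it treats short reads from the syscall as errors for brevity.

```c
// Probe getrandom(2) via the raw syscall; fall back to /dev/urandom
// when the kernel predates it (ENOSYS). This mirrors the general
// fallback shape discussed in the thread, not any specific library.
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

// Fill buf with len random bytes; returns 0 on success, -1 on error.
int fill_random(unsigned char *buf, size_t len) {
    long n = syscall(SYS_getrandom, buf, len, 0);
    if (n == (long)len)
        return 0;                 // kernel >= 3.17: syscall worked
    if (n >= 0 || errno != ENOSYS)
        return -1;                // short read or a real failure
    // Old-kernel fallback: read the same entropy pool via the device.
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0)
        return -1;
    size_t got = 0;
    while (got < len) {
        ssize_t r = read(fd, buf + got, len - got);
        if (r <= 0) { close(fd); return -1; }
        got += (size_t)r;
    }
    close(fd);
    return 0;
}
```

A real implementation would typically cache the ENOSYS result so the failing syscall is only attempted once.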
This assumes I keep using only our own binaries -- if I ever have to re-bootstrap from upstream binaries for stage0, I would also have to use some interception (like ) to hack in the workarounds. This is all possible, but much worse than the status quo of maintaining support here.\nI think this sounds reasonable. We aren't going to be able to update these kernels, it's just a question of when we would drop support for these OSes. Especially given that we wont need RHEL 5 CI support, I think that leaving things as they are is fine. I opened this issue to verify two things: 1) That we weren't intending on supporting RHEL 5 \"forever\", and had a clear date when to drop support. 2) That someone from Red Hat actively cared about supporting these older kernels. It looks like both these things are true. We should remove RHEL 5 workarounds after November 30, 2020. This issue can be postponed until then. EDIT: For our use case in , we were able to add compatibility for RHEL 5 .\nSee for a recent case where supporting such ancient kernels required extra manual work ( to the 2 lines mentioned by above). I am not sure how often that comes up, but probably the hardest part is to even notice that it happens -- and even then those codepaths will likely linger untested.\nYes, but this will still be true in general. If the CI kernel is newer than whatever kernel baseline we choose, it will be possible for newer syscalls to pass undetected. I don't know if we have any control over that -- can we choose a particular VM image? So that leaves us with a stated policy of support. The more that developers are aware of such issues, the better, but I'm the one most likely to notice when my build actually fails. If I come back with a fix, I at least need the project to be receptive. I haven't been in the habit of building nightly on our RHEL6 builders, but maybe I should! Catching this early is better for everyone involved...\nThat seems reasonable. 
How quick is the Rust related bootstrapping? Could it be run automatically (or more frequently)? This is an interesting point. If I hadn't made that change, would have kept working on RHEL 5, it just would have leaked an FD. I'm not sure how we would even test for this sort of thing. This isn't a RHEL 5 specific concern, I just don't know how Rust tests these types of resource leak issues in general.\nAbout 2 hours. Automation is harder, since I believe our build system needs my credentials, but I'll look into what is possible here. Yeah, that's a lot more subtle than an !", "positive_passages": [{"docid": "doc-en-rust-656373ef21fd754284e0e19da4ed3494db327b81ae20b0ffbb7be8f712d84cac", "text": "COPY host-x86_64/dist-x86_64-linux/build-openssl.sh /tmp/ RUN ./build-openssl.sh # The `curl` binary on CentOS doesn't support SNI which is needed for fetching # The `curl` binary on Debian 6 doesn't support SNI which is needed for fetching # some https urls we have, so install a new version of libcurl + curl which is # using the openssl we just built previously. # # Note that we also disable a bunch of optional features of curl that we don't # really need. COPY host-x86_64/dist-x86_64-linux/build-curl.sh /tmp/ RUN ./build-curl.sh RUN ./build-curl.sh && apt-get remove -y curl # binutils < 2.22 has a bug where the 32-bit executables it generates # immediately segfault in Rust, so we need to install our own binutils.", "commid": "rust_pr_74163"}], "negative_passages": []}
{"query_id": "q-en-rust-bf57e7b18ed8b9ab5e27682bd209117a0b094e52120110102ca62034de171272", "positive_passages": [{"docid": "doc-en-rust-105fee5e8cff6ca966c8e7276a2157c592068a11adf4dd26a5bbd829cba828a2", "text": "COPY host-x86_64/dist-x86_64-linux/build-binutils.sh /tmp/ RUN ./build-binutils.sh # libssh2 (a dependency of Cargo) requires cmake 2.8.11 or higher but CentOS # only has 2.6.4, so build our own COPY host-x86_64/dist-x86_64-linux/build-cmake.sh /tmp/ RUN ./build-cmake.sh # Build a version of gcc capable of building LLVM 6 # Need at least GCC 5.1 to compile LLVM nowadays COPY host-x86_64/dist-x86_64-linux/build-gcc.sh /tmp/ RUN ./build-gcc.sh RUN ./build-gcc.sh && apt-get remove -y gcc g++ # CentOS 5.5 has Python 2.4 by default, but LLVM needs 2.7+ # Debian 6 has Python 2.6 by default, but LLVM needs 2.7+ COPY host-x86_64/dist-x86_64-linux/build-python.sh /tmp/ RUN ./build-python.sh # Now build LLVM+Clang 7, afterwards configuring further compilations to use the # LLVM needs cmake 3.4.3 or higher, and is planning to raise to 3.13.4. COPY host-x86_64/dist-x86_64-linux/build-cmake.sh /tmp/ RUN ./build-cmake.sh # Now build LLVM+Clang, afterwards configuring further compilations to use the # clang/clang++ compilers.
COPY host-x86_64/dist-x86_64-linux/build-clang.sh host-x86_64/dist-x86_64-linux/llvm-project-centos.patch /tmp/ COPY host-x86_64/dist-x86_64-linux/build-clang.sh /tmp/ RUN ./build-clang.sh ENV CC=clang CXX=clang++ # Apparently CentOS 5.5 doesn't have `git` in yum, but we're gonna need it for # cloning, so download and build it here. COPY host-x86_64/dist-x86_64-linux/build-git.sh /tmp/ RUN ./build-git.sh # for sanitizers, we need kernel headers files newer than the ones CentOS ships # with so we install newer ones here COPY host-x86_64/dist-x86_64-linux/build-headers.sh /tmp/ RUN ./build-headers.sh # OpenSSL requires a more recent version of perl # with so we install newer ones here COPY host-x86_64/dist-x86_64-linux/build-perl.sh /tmp/ RUN ./build-perl.sh COPY scripts/sccache.sh /scripts/ RUN sh /scripts/sccache.sh", "commid": "rust_pr_74163"}], "negative_passages": []}
{"query_id": "q-en-rust-bf57e7b18ed8b9ab5e27682bd209117a0b094e52120110102ca62034de171272", "query": "The docs currently claim that the minimum Linux kernel version for is 2.6.18 (released September 20th, 2006). This is because RHEL 5 used that kernel version. However, RHEL 5 entered ELS on March 31, 2017. Should we continue to support RHEL 5 for , or should we increase the minimum Linux Kernel version to 2.6.27 (2nd LTS) or 2.6.32 (RHEL 6, 3rd LTS)? . Even bumping the min-version to 2.6.27 would allow us to remove most of the Linux-specific hacks in . Example: .\nGiven that RHEL is the only reason we keep comically old kernel versions around, I would propose that Rust only support RHEL until Maintenance Support ends. This is (essentially) what we already did for RHEL 4. Rust never supported RHEL 4, and back when RHEL 5 still had maintenance support. It would be nice to get someone from Red Hat or a RHEL customer to comment. This policy would allow us to increment the minimum kernel from 2.6.18 to 2.6.32 (and remove a lot of Linux-specific hacks). Note that RHEL has a weird (and very long) support system for RHEL 4, 5, and 6 (sources: and ). 
It has 5 major steps: Full Support Normal support of the OS Maintenance Support 1 Bug/Security fixes Limited hardware refresh Maintenance Support 2 Bug/Security fixes The end of this phase is considered \"Product retirement\" Extended Life Cycle Support (ELS) Additional paid product from Red Hat Gives updates for a longer period of time No additional releases/images Extended Life Phase (ELP) No more updates Limited Technical Support End date not given by Red Hat Current status of RHEL versions: RHEL 4 Not supported by Rust Currently in ELP (no end date specified) ELS ended March 31, 2017 RHEL 5 Supported by Rust Currently in ELS Maintenance Support ended March 31, 2017 ELS ends November 30, 2020 RHEL 6 Supported by Rust Currently in Maintenance Support 2 Maintenance Support ends November 30, 2020\ncc which I think they could be related.\nIt also may be worth to drop support of Windows XP and Vista (especially considering that panics are broken on XP since June 2016, see: ). Previously it was discussed . cc\nBesides , which other things would we be able to clean up?\nGood question! There are 6 workarounds in for older Linux versions (that I could find). Increasing the minimum version to 2.6.32 (aka 3rd Kernel LTS, aka RHEL 6) would fix 5 of them. Code links are inline: (mentioned above, in 2.6.23) ( in 2.6.24, there is also the mention of a bug occuring on \"some linux kernel at some point\") to atomically set the flag on the pipe fds ( in 2.6.27) ( in 2.6.27) to permit use of ( in 2.6.28) ( in 4.5, not fixed by this proposal) As you can see, the workarounds fixed by this proposal all have a similar flavor.\nI am Red Hat's maintainer for Rust on RHEL -- thanks for the cc. I try to keep an eye on new issues, but this one slipped past me. Red Hat only ships the Rust toolchain to customers for RHEL 7 and RHEL 8. If our customers would like to use Rust on older RHEL, they can do so via , and we'll support them in the same way we would for any other third-party software. 
Internally we do also build and use on RHEL 6, mostly because it's needed to ship Firefox updates. This is where it gets a little hairy, because each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel. I would have to apply local workarounds if upstream Rust loses the ability to run on 2.6.18 kernels. We prefer to keep fixes upstream as much as possible, both for open sharing and to avoid bitrot. Is there much of a benefit to removing the current workarounds? I agree all of that code is annoying and cumbersome, but it's already written, and as far as I know it hasn't been a maintenance burden. Do you see otherwise? If there are any known issues with such Linux compatibility code, I am definitely willing to take assignment for fixing them.\n(Not a Rust maintainer, but I'm a Rust user and maintain a lot of other OSS software so I have well-developed feelings around supporting Very Old Software :-)) Do you have any sense of on what timeline Rust would be able to drop support for 2.6.18 kernels without causing you pain? In general I don't think people mind supporting configurations that have users and are painful to work around, but needing to support them for forever is a bitter pill to swallow! Particularly as they get harder and harder to test over time (already I have no idea how to test on really old kernels besides building it myself). So if there was an estimate \"we'd like to be able to support this until Q2 2020\", even if it's not set in stone, I think that would be very helpful!\nThe other benefit would be that Rust wouldn't have to use CentOS 5 for CI, which means we don't have to patch LLVM (and Emscripten LLVM) to compile on those systems. Of course that's also fairly limited in scope.\nAs soon as we stop shipping Firefox and Thunderbird updates on RHEL 6, I won't need Rust there anymore. AFAIK this does correspond to the end of Maintenance Support, November 30, 2020. 
Then I'll be on to RHEL 7 builders as my minimum, probably still with some RHEL 6 2.6.32 kernels involved. It should be fine for me if we update CI to CentOS 6. This is mostly concerned with how the release binaries are linked to GLIBC symbol versions, which is all in userspace. It's a broader community question whether any other Rust users still care about running on RHEL or CentOS 5. (Small caveat - glibc support for a symbol can still return , .)\nI noticed that Red Hat also has Extended Life Cycle Support (ELS) for RHEL 6 until June 30, 2024. Will you need RHEL 5 to work during this time? I don't know how ELS works with Firefox updates. Also, is there any reason RHEL 4 support wasn't an issue prior to March 31, 2017 (while RHEL 5 was still normally supported)? This issue came up for me when dealing with opening files in , see for more info. No single RHEL 5 issue is that bad, it's mainly just the sum of a bunch of tiny issues.\nI'm not on that team, but AFAIK we don't ship Firefox updates for ELS. The last build I see for RHEL 5 was . Maybe there could be an exception for a severe security issue, but I really doubt we'd rebase to newer Firefox for that, which means Rust requirements won't change. New Firefox ESR versions do require a newer Rust toolchain too, which is generally why we have to keep up. Otherwise we could just freeze some older compat rustc while upstream moves on. Rust wasn't required until Firefox 52.\nthat makes perfect sense to me, thanks for clarifying. So the proposed policy would be: Support RHEL until RHEL is retired (i.e. ends normal support). This would mean: Supporting RHEL 5 (v2.6.18) until November 30, 2020 Supporting RHEL 6 (v2.6.32) until June 30, 2024 Supporting RHEL 7 (v3.10) until May, 2029 This is a much longer support tail than any other Linux distros (that I know of), so it would also be the effective minimum kernel version of , , and their dependencies. How does that sound to people? 
EDIT: The alternative to this would be Support RHEL until it is retired, taking the table above and incrementing the RHEL versions by 1. It would also mean being able to drop RHEL 5 support now.\nSure, that's ideal for me. :smile: As mentioned, is just needed for kernel support, as far as I'm concerned. Userspace concerns like GLIBC symbol versions can stick to currently supported for my needs.\nRed Hat should backport the features we need (CLOEXEC and getrandom, in particular) to whatever kernels that it wants Rust to support. I don't know any good reason why they don't do so, other than it's cheaper to convince the whole world to keep supporting older kernel versions than it is to do the backports. We should change that dynamic.\nThe alternative to this would be Support RHEL N until it is retired I think we should not officially support retired OS versions (well, maybe with some grace period), so I prefer this option. Supporting 14 year old kernels seems a bit too much to me. each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel Is it possible to apply kernel patches to those builders to add support for functionality like CLOEXEC? Or build Firefox separately on RHEL 6?\nI think it won't be fruitful for us to debate the merits of stable enterprise kernels, but no, this is not a possibility. An alternative take is \"Red Hat should be responsible for the work in Rust to maintain the support they want\" -- here I am, ready and willing. I definitely can't change those kernels. The ones that are stuck on N-1 are precisely to avoid rocking the boat, and backporting features is a big change. Some of our arches do eventually update the builders to their matching N kernel, but some don't, and I don't know all the reasons. If the workarounds are removed, then I will have to reapply them in our own builds. 
This assumes I keep using only our own binaries -- if I ever have to re-bootstrap from upstream binaries for stage0, I would also have to use some interception (like ) to hack in the workarounds. This is all possible, but much worse than the status quo of maintaining support here.\nI think this sounds reasonable. We aren't going to be able to update these kernels, it's just a question of when we would drop support for these OSes. Especially given that we won't need RHEL 5 CI support, I think that leaving things as they are is fine. I opened this issue to verify two things: 1) That we weren't intending on supporting RHEL 5 "forever", and had a clear date when to drop support. 2) That someone from Red Hat actively cared about supporting these older kernels. It looks like both these things are true. We should remove RHEL 5 workarounds after November 30, 2020. This issue can be postponed until then. EDIT: For our use case in , we were able to add compatibility for RHEL 5 .\nSee for a recent case where supporting such ancient kernels required extra manual work ( to the 2 lines mentioned by above). I am not sure how often that comes up, but probably the hardest part is to even notice that it happens -- and even then those codepaths will likely linger untested.\nYes, but this will still be true in general. If the CI kernel is newer than whatever kernel baseline we choose, it will be possible for newer syscalls to pass undetected. I don't know if we have any control over that -- can we choose a particular VM image? So that leaves us with a stated policy of support. The more that developers are aware of such issues, the better, but I'm the one most likely to notice when my build actually fails. If I come back with a fix, I at least need the project to be receptive. I haven't been in the habit of building nightly on our RHEL 6 builders, but maybe I should! Catching this early is better for everyone involved...\nThat seems reasonable.
How quick is the Rust related bootstrapping? Could it be run automatically (or more frequently)? This is an interesting point. If I hadn't made that change, would have kept working on RHEL 5, it just would have leaked an FD. I'm not sure how we would even test for this sort of thing. This isn't a RHEL 5 specific concern, I just don't know how Rust tests these types of resource leak issues in general.\nAbout 2 hours. Automation is harder, since I believe our build system needs my credentials, but I'll look into what is possible here. Yeah, that's a lot more subtle than an !", "positive_passages": [{"docid": "doc-en-rust-a0e58996ffd8f8577fa88f608132d77e514da3cb89e702eccb49e1446ebad497", "text": "curl -L https://github.com/llvm/llvm-project/archive/$LLVM.tar.gz | tar xzf - --strip-components=1 yum install -y patch patch -Np1 < ../llvm-project-centos.patch mkdir clang-build cd clang-build", "commid": "rust_pr_74163"}], "negative_passages": []}
{"query_id": "q-en-rust-bf57e7b18ed8b9ab5e27682bd209117a0b094e52120110102ca62034de171272", "query": "The docs currently claim that the minimum Linux kernel version for is 2.6.18 (released September 20th, 2006). This is because RHEL 5 used that kernel version. However, RHEL 5 entered ELS on March 31, 2017. Should we continue to support RHEL 5 for , or should we increase the minimum Linux Kernel version to 2.6.27 (2nd LTS) or 2.6.32 (RHEL 6, 3rd LTS)? . Even bumping the min-version to 2.6.27 would allow us to remove most of the Linux-specific hacks in . Example: .\nGiven that RHEL is the only reason we keep comically old kernel versions around, I would propose that Rust only support RHEL until Maintenance Support ends. This is (essentially) what we already did for RHEL 4. Rust never supported RHEL 4, and back when RHEL 5 still had maintenance support. It would be nice to get someone from Red Hat or a RHEL customer to comment. This policy would allow us to increment the minimum kernel from 2.6.18 to 2.6.32 (and remove a lot of Linux-specific hacks). Note that RHEL has a weird (and very long) support system for RHEL 4, 5, and 6 (sources: and ). 
It has 5 major steps: Full Support Normal support of the OS Maintenance Support 1 Bug/Security fixes Limited hardware refresh Maintenance Support 2 Bug/Security fixes The end of this phase is considered \"Product retirement\" Extended Life Cycle Support (ELS) Additional paid product from Red Hat Gives updates for a longer period of time No additional releases/images Extended Life Phase (ELP) No more updates Limited Technical Support End date not given by Red Hat Current status of RHEL versions: RHEL 4 Not supported by Rust Currently in ELP (no end date specified) ELS ended March 31, 2017 RHEL 5 Supported by Rust Currently in ELS Maintenance Support ended March 31, 2017 ELS ends November 30, 2020 RHEL 6 Supported by Rust Currently in Maintenance Support 2 Maintenance Support ends November 30, 2020\ncc which I think they could be related.\nIt also may be worth to drop support of Windows XP and Vista (especially considering that panics are broken on XP since June 2016, see: ). Previously it was discussed . cc\nBesides , which other things would we be able to clean up?\nGood question! There are 6 workarounds in for older Linux versions (that I could find). Increasing the minimum version to 2.6.32 (aka 3rd Kernel LTS, aka RHEL 6) would fix 5 of them. Code links are inline: (mentioned above, in 2.6.23) ( in 2.6.24, there is also the mention of a bug occuring on \"some linux kernel at some point\") to atomically set the flag on the pipe fds ( in 2.6.27) ( in 2.6.27) to permit use of ( in 2.6.28) ( in 4.5, not fixed by this proposal) As you can see, the workarounds fixed by this proposal all have a similar flavor.\nI am Red Hat's maintainer for Rust on RHEL -- thanks for the cc. I try to keep an eye on new issues, but this one slipped past me. Red Hat only ships the Rust toolchain to customers for RHEL 7 and RHEL 8. If our customers would like to use Rust on older RHEL, they can do so via , and we'll support them in the same way we would for any other third-party software. 
Internally we do also build and use on RHEL 6, mostly because it's needed to ship Firefox updates. This is where it gets a little hairy, because each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel. I would have to apply local workarounds if upstream Rust loses the ability to run on 2.6.18 kernels. We prefer to keep fixes upstream as much as possible, both for open sharing and to avoid bitrot. Is there much of a benefit to removing the current workarounds? I agree all of that code is annoying and cumbersome, but it's already written, and as far as I know it hasn't been a maintenance burden. Do you see otherwise? If there are any known issues with such Linux compatibility code, I am definitely willing to take assignment for fixing them.\n(Not a Rust maintainer, but I'm a Rust user and maintain a lot of other OSS software so I have well-developed feelings around supporting Very Old Software :-)) Do you have any sense of on what timeline Rust would be able to drop support for 2.6.18 kernels without causing you pain? In general I don't think people mind supporting configurations that have users and are painful to work around, but needing to support them for forever is a bitter pill to swallow! Particularly as they get harder and harder to test over time (already I have no idea how to test on really old kernels besides building it myself). So if there was an estimate \"we'd like to be able to support this until Q2 2020\", even if it's not set in stone, I think that would be very helpful!\nThe other benefit would be that Rust wouldn't have to use CentOS 5 for CI, which means we don't have to patch LLVM (and Emscripten LLVM) to compile on those systems. Of course that's also fairly limited in scope.\nAs soon as we stop shipping Firefox and Thunderbird updates on RHEL 6, I won't need Rust there anymore. AFAIK this does correspond to the end of Maintenance Support, November 30, 2020. 
Then I'll be on to RHEL 7 builders as my minimum, probably still with some RHEL 6 2.6.32 kernels involved. It should be fine for me if we update CI to CentOS 6. This is mostly concerned with how the release binaries are linked to GLIBC symbol versions, which is all in userspace. It's a broader community question whether any other Rust users still care about running on RHEL or CentOS 5. (Small caveat - glibc support for a symbol can still return , .)\nI noticed that Red Hat also has Extended Life Cycle Support (ELS) for RHEL 6 until June 30, 2024. Will you need RHEL 5 to work during this time? I don't know how ELS works with Firefox updates. Also, is there any reason RHEL 4 support wasn't an issue prior to March 31, 2017 (while RHEL 5 was still normally supported)? This issue came up for me when dealing with opening files in , see for more info. No single RHEL 5 issue is that bad, it's mainly just the sum of a bunch of tiny issues.\nI'm not on that team, but AFAIK we don't ship Firefox updates for ELS. The last build I see for RHEL 5 was . Maybe there could be an exception for a severe security issue, but I really doubt we'd rebase to newer Firefox for that, which means Rust requirements won't change. New Firefox ESR versions do require a newer Rust toolchain too, which is generally why we have to keep up. Otherwise we could just freeze some older compat rustc while upstream moves on. Rust wasn't required until Firefox 52.\nthat makes perfect sense to me, thanks for clarifying. So the proposed policy would be: Support RHEL until RHEL is retired (i.e. ends normal support). This would mean: Supporting RHEL 5 (v2.6.18) until November 30, 2020 Supporting RHEL 6 (v2.6.32) until June 30, 2024 Supporting RHEL 7 (v3.10) until May, 2029 This is a much longer support tail than any other Linux distros (that I know of), so it would also be the effective minimum kernel version of , , and their dependencies. How does that sound to people? 
EDIT: The alternative to this would be Support RHEL until it is retired, taking the table above and incrementing the RHEL versions by 1. It would also mean being able to drop RHEL 5 support now.\nSure, that's ideal for me. :smile: As mentioned, is just needed for kernel support, as far as I'm concerned. Userspace concerns like GLIBC symbol versions can stick to currently supported for my needs.\nRed Hat should backport the features we need (CLOEXEC and getrandom, in particular) to whatever kernels that it wants Rust to support. I don't know any good reason why they don't do so, other than it's cheaper to convince the whole world to keep supporting older kernel versions than it is to do the backports. We should change that dynamic.\nThe alternative to this would be Support RHEL N until it is retired I think we should not officially support retired OS versions (well, maybe with some grace period), so I prefer this option. Supporting 14 year old kernels seems a bit too much to me. each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel Is it possible to apply kernel patches to those builders to add support for functionality like CLOEXEC? Or build Firefox separately on RHEL 6?\nI think it won't be fruitful for us to debate the merits of stable enterprise kernels, but no, this is not a possibility. An alternative take is \"Red Hat should be responsible for the work in Rust to maintain the support they want\" -- here I am, ready and willing. I definitely can't change those kernels. The ones that are stuck on N-1 are precisely to avoid rocking the boat, and backporting features is a big change. Some of our arches do eventually update the builders to their matching N kernel, but some don't, and I don't know all the reasons. If the workarounds are removed, then I will have to reapply them in our own builds. 
This assumes I keep using only our own binaries -- if I ever have to re-bootstrap from upstream binaries for stage0, I would also have to use some interception (like ) to hack in the workarounds. This is all possible, but much worse than the status quo of maintaining support here.\nI think this sounds reasonable. We aren't going to be able to update these kernels, it's just a question of when we would drop support for these OSes. Especially given that we wont need RHEL 5 CI support, I think that leaving things as they are is fine. I opened this issue to verify two things: 1) That we weren't intending on supporting RHEL 5 \"forever\", and had a clear date when to drop support. 2) That someone from Red Hat actively cared about supporting these older kernels. It looks like both these things are true. We should remove RHEL 5 workarounds after November 30, 2020. This issue can be postponed until then. EDIT: For our use case in , we were able to add compatibility for RHEL 5 .\nSee for a recent case where supporting such ancient kernels required extra manual work ( to the 2 lines mentioned by above). I am not sure how often that comes up, but probably the hardest part is to even notice that it happens -- and even then those codepaths will likely linger untested.\nYes, but this will still be true in general. If the CI kernel is newer than whatever kernel baseline we choose, it will be possible for newer syscalls to pass undetected. I don't know if we have any control over that -- can we choose a particular VM image? So that leaves us with a stated policy of support. The more that developers are aware of such issues, the better, but I'm the one most likely to notice when my build actually fails. If I come back with a fix, I at least need the project to be receptive. I haven't been in the habit of building nightly on our RHEL6 builders, but maybe I should! Catching this early is better for everyone involved...\nThat seems reasonable. 
How quick is the Rust related bootstrapping? Could it be run automatically (or more frequently)? This is an interesting point. If I hadn't made that change, would have kept working on RHEL 5, it just would have leaked an FD. I'm not sure how we would even test for this sort of thing. This isn't a RHEL 5 specific concern, I just don't know how Rust tests these types of resource leak issues in general.\nAbout 2 hours. Automation is harder, since I believe our build system needs my credentials, but I'll look into what is possible here. Yeah, that's a lot more subtle than an !", "positive_passages": [{"docid": "doc-en-rust-803836717353ca6b73f111855c5accd5b4b473766988ba4367ddf95d70ed2ec7", "text": "cd .. rm -rf gcc-build rm -rf gcc-$GCC yum erase -y gcc gcc-c++ binutils ", "commid": "rust_pr_74163"}], "negative_passages": []}
{"query_id": "q-en-rust-bf57e7b18ed8b9ab5e27682bd209117a0b094e52120110102ca62034de171272", "query": "The docs currently claim that the minimum Linux kernel version for is 2.6.18 (released September 20th, 2006). This is because RHEL 5 used that kernel version. However, RHEL 5 entered ELS on March 31, 2017. Should we continue to support RHEL 5 for , or should we increase the minimum Linux Kernel version to 2.6.27 (2nd LTS) or 2.6.32 (RHEL 6, 3rd LTS)? . Even bumping the min-version to 2.6.27 would allow us to remove most of the Linux-specific hacks in . Example: .\nGiven that RHEL is the only reason we keep comically old kernel versions around, I would propose that Rust only support RHEL until Maintenance Support ends. This is (essentially) what we already did for RHEL 4. Rust never supported RHEL 4, and back when RHEL 5 still had maintenance support. It would be nice to get someone from Red Hat or a RHEL customer to comment. This policy would allow us to increment the minimum kernel from 2.6.18 to 2.6.32 (and remove a lot of Linux-specific hacks). Note that RHEL has a weird (and very long) support system for RHEL 4, 5, and 6 (sources: and ). 
It has 5 major steps: Full Support Normal support of the OS Maintenance Support 1 Bug/Security fixes Limited hardware refresh Maintenance Support 2 Bug/Security fixes The end of this phase is considered \"Product retirement\" Extended Life Cycle Support (ELS) Additional paid product from Red Hat Gives updates for a longer period of time No additional releases/images Extended Life Phase (ELP) No more updates Limited Technical Support End date not given by Red Hat Current status of RHEL versions: RHEL 4 Not supported by Rust Currently in ELP (no end date specified) ELS ended March 31, 2017 RHEL 5 Supported by Rust Currently in ELS Maintenance Support ended March 31, 2017 ELS ends November 30, 2020 RHEL 6 Supported by Rust Currently in Maintenance Support 2 Maintenance Support ends November 30, 2020\ncc which I think they could be related.\nIt also may be worth to drop support of Windows XP and Vista (especially considering that panics are broken on XP since June 2016, see: ). Previously it was discussed . cc\nBesides , which other things would we be able to clean up?\nGood question! There are 6 workarounds in for older Linux versions (that I could find). Increasing the minimum version to 2.6.32 (aka 3rd Kernel LTS, aka RHEL 6) would fix 5 of them. Code links are inline: (mentioned above, in 2.6.23) ( in 2.6.24, there is also the mention of a bug occuring on \"some linux kernel at some point\") to atomically set the flag on the pipe fds ( in 2.6.27) ( in 2.6.27) to permit use of ( in 2.6.28) ( in 4.5, not fixed by this proposal) As you can see, the workarounds fixed by this proposal all have a similar flavor.\nI am Red Hat's maintainer for Rust on RHEL -- thanks for the cc. I try to keep an eye on new issues, but this one slipped past me. Red Hat only ships the Rust toolchain to customers for RHEL 7 and RHEL 8. If our customers would like to use Rust on older RHEL, they can do so via , and we'll support them in the same way we would for any other third-party software. 
Internally we do also build and use on RHEL 6, mostly because it's needed to ship Firefox updates. This is where it gets a little hairy, because each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel. I would have to apply local workarounds if upstream Rust loses the ability to run on 2.6.18 kernels. We prefer to keep fixes upstream as much as possible, both for open sharing and to avoid bitrot. Is there much of a benefit to removing the current workarounds? I agree all of that code is annoying and cumbersome, but it's already written, and as far as I know it hasn't been a maintenance burden. Do you see otherwise? If there are any known issues with such Linux compatibility code, I am definitely willing to take assignment for fixing them.\n(Not a Rust maintainer, but I'm a Rust user and maintain a lot of other OSS software so I have well-developed feelings around supporting Very Old Software :-)) Do you have any sense of on what timeline Rust would be able to drop support for 2.6.18 kernels without causing you pain? In general I don't think people mind supporting configurations that have users and are painful to work around, but needing to support them for forever is a bitter pill to swallow! Particularly as they get harder and harder to test over time (already I have no idea how to test on really old kernels besides building it myself). So if there was an estimate \"we'd like to be able to support this until Q2 2020\", even if it's not set in stone, I think that would be very helpful!\nThe other benefit would be that Rust wouldn't have to use CentOS 5 for CI, which means we don't have to patch LLVM (and Emscripten LLVM) to compile on those systems. Of course that's also fairly limited in scope.\nAs soon as we stop shipping Firefox and Thunderbird updates on RHEL 6, I won't need Rust there anymore. AFAIK this does correspond to the end of Maintenance Support, November 30, 2020. 
Then I'll be on to RHEL 7 builders as my minimum, probably still with some RHEL 6 2.6.32 kernels involved. It should be fine for me if we update CI to CentOS 6. This is mostly concerned with how the release binaries are linked to GLIBC symbol versions, which is all in userspace. It's a broader community question whether any other Rust users still care about running on RHEL or CentOS 5. (Small caveat - glibc support for a symbol can still return , .)\nI noticed that Red Hat also has Extended Life Cycle Support (ELS) for RHEL 6 until June 30, 2024. Will you need RHEL 5 to work during this time? I don't know how ELS works with Firefox updates. Also, is there any reason RHEL 4 support wasn't an issue prior to March 31, 2017 (while RHEL 5 was still normally supported)? This issue came up for me when dealing with opening files in , see for more info. No single RHEL 5 issue is that bad, it's mainly just the sum of a bunch of tiny issues.\nI'm not on that team, but AFAIK we don't ship Firefox updates for ELS. The last build I see for RHEL 5 was . Maybe there could be an exception for a severe security issue, but I really doubt we'd rebase to newer Firefox for that, which means Rust requirements won't change. New Firefox ESR versions do require a newer Rust toolchain too, which is generally why we have to keep up. Otherwise we could just freeze some older compat rustc while upstream moves on. Rust wasn't required until Firefox 52.\nthat makes perfect sense to me, thanks for clarifying. So the proposed policy would be: Support RHEL until RHEL is retired (i.e. ends normal support). This would mean: Supporting RHEL 5 (v2.6.18) until November 30, 2020 Supporting RHEL 6 (v2.6.32) until June 30, 2024 Supporting RHEL 7 (v3.10) until May, 2029 This is a much longer support tail than any other Linux distros (that I know of), so it would also be the effective minimum kernel version of , , and their dependencies. How does that sound to people? 
EDIT: The alternative to this would be Support RHEL until it is retired, taking the table above and incrementing the RHEL versions by 1. It would also mean being able to drop RHEL 5 support now.\nSure, that's ideal for me. :smile: As mentioned, is just needed for kernel support, as far as I'm concerned. Userspace concerns like GLIBC symbol versions can stick to currently supported for my needs.\nRed Hat should backport the features we need (CLOEXEC and getrandom, in particular) to whatever kernels that it wants Rust to support. I don't know any good reason why they don't do so, other than it's cheaper to convince the whole world to keep supporting older kernel versions than it is to do the backports. We should change that dynamic.\nThe alternative to this would be Support RHEL N until it is retired I think we should not officially support retired OS versions (well, maybe with some grace period), so I prefer this option. Supporting 14 year old kernels seems a bit too much to me. each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel Is it possible to apply kernel patches to those builders to add support for functionality like CLOEXEC? Or build Firefox separately on RHEL 6?\nI think it won't be fruitful for us to debate the merits of stable enterprise kernels, but no, this is not a possibility. An alternative take is \"Red Hat should be responsible for the work in Rust to maintain the support they want\" -- here I am, ready and willing. I definitely can't change those kernels. The ones that are stuck on N-1 are precisely to avoid rocking the boat, and backporting features is a big change. Some of our arches do eventually update the builders to their matching N kernel, but some don't, and I don't know all the reasons. If the workarounds are removed, then I will have to reapply them in our own builds. 
This assumes I keep using only our own binaries -- if I ever have to re-bootstrap from upstream binaries for stage0, I would also have to use some interception (like ) to hack in the workarounds. This is all possible, but much worse than the status quo of maintaining support here.\nI think this sounds reasonable. We aren't going to be able to update these kernels, it's just a question of when we would drop support for these OSes. Especially given that we wont need RHEL 5 CI support, I think that leaving things as they are is fine. I opened this issue to verify two things: 1) That we weren't intending on supporting RHEL 5 \"forever\", and had a clear date when to drop support. 2) That someone from Red Hat actively cared about supporting these older kernels. It looks like both these things are true. We should remove RHEL 5 workarounds after November 30, 2020. This issue can be postponed until then. EDIT: For our use case in , we were able to add compatibility for RHEL 5 .\nSee for a recent case where supporting such ancient kernels required extra manual work ( to the 2 lines mentioned by above). I am not sure how often that comes up, but probably the hardest part is to even notice that it happens -- and even then those codepaths will likely linger untested.\nYes, but this will still be true in general. If the CI kernel is newer than whatever kernel baseline we choose, it will be possible for newer syscalls to pass undetected. I don't know if we have any control over that -- can we choose a particular VM image? So that leaves us with a stated policy of support. The more that developers are aware of such issues, the better, but I'm the one most likely to notice when my build actually fails. If I come back with a fix, I at least need the project to be receptive. I haven't been in the habit of building nightly on our RHEL6 builders, but maybe I should! Catching this early is better for everyone involved...\nThat seems reasonable. 
How quick is the Rust related bootstrapping? Could it be run automatically (or more frequently)? This is an interesting point. If I hadn't made that change, would have kept working on RHEL 5, it just would have leaked an FD. I'm not sure how we would even test for this sort of thing. This isn't a RHEL 5 specific concern, I just don't know how Rust tests these types of resource leak issues in general.\nAbout 2 hours. Automation is harder, since I believe our build system needs my credentials, but I'll look into what is possible here. Yeah, that's a lot more subtle than an !", "positive_passages": [{"docid": "doc-en-rust-a2758b9b1e22620c729fe6690090f83f342f595604fa09e3d46d6773f68ed07e", "text": "ulimit -c unlimited fi # There was a bad interaction between \"old\" 32-bit binaries on current 64-bit # kernels with selinux enabled, where ASLR mmap would sometimes choose a low # address and then block it for being below `vm.mmap_min_addr` -> `EACCES`. # This is probably a kernel bug, but setting `ulimit -Hs` works around it. # See also `dist-i686-linux` where this setting is enabled. if [ \"$SET_HARD_RLIMIT_STACK\" = \"1\" ]; then rlimit_stack=$(ulimit -Ss) if [ \"$rlimit_stack\" != \"\" ]; then ulimit -Hs \"$rlimit_stack\" fi fi ci_dir=`cd $(dirname $0) && pwd` source \"$ci_dir/shared.sh\"", "commid": "rust_pr_74163"}], "negative_passages": []}
{"query_id": "q-en-rust-057bf056c4b8388e5fa16727b81b3be344503fb40beb800f6b139d8eab71d2af", "query": "Found with the help of .\nThere's two slicing operations in . I'm trying to figure out how a non-boundary index could be appearing in there... maybe is calling with a span that begins at index 10? (where does this come from? ?) Building a debug compiler to check those log statements... Edit: Building the debug compiler was a bust. Embarassingly, things seem to have changed and I don't know how to get those debug macros to fire in the current compiler.\ndo you mean you were using and not seeing debug output? If so, that is because the environment variable name under was changed to . (I make this mistake pretty much every day due to muscle memory.) As an example, here is the tail of my log output:\ntriage: P-medium. Removing nomination.\nThis was broken between 1.15 and 1.16, but those are so old and the nature of the ICE itself has changed from 1.28 to 1.29, so I do not think bisection would be worthwhile.\n if let Ok(Some(mut arg)) = self.parse_self_arg() { if let Some(mut arg) = self.parse_self_arg()? { arg.attrs = attrs.into(); return self.recover_bad_self_arg(arg, is_trait_item); }", "commid": "rust_pr_62668"}], "negative_passages": []}
{"query_id": "q-en-rust-b398223f8c559d92553d524d0ab191ab805fe73852ce030490fdbd945bea1363", "query": "The following code () leads to an ICE: Broken since 1.32.0 up to the current nightly, earlier versions are not affected. LastToken::Was(_) => panic!(\"our vector went away?\"), LastToken::Was(ref was) => { let msg = format!(\"our vector went away? - found Was({:?})\", was); debug!(\"collect_tokens: {}\", msg); self.sess.span_diagnostic.delay_span_bug(self.token.span, &msg); // This can happen due to a bad interaction of two unrelated recovery mechanisms // with mismatched delimiters *and* recovery lookahead on the likely typo // `pub ident(` (#62895, different but similar to the case above). return Ok((ret?, TokenStream::new(vec![]))); } }; // If we're not at EOF our current token wasn't actually consumed by", "commid": "rust_pr_62887.0"}], "negative_passages": []}
{"query_id": "q-en-rust-29abb5ef31b1fcd0d5114ac873e6d18fa269ef369679885f469477bc9aacbc42", "query": "I'm getting an internal compiler error on the following program (found by ): I'm seeing the error on stable, beta, and nightly. As with , the error occurs on () but not on (). , the proposed fix for , does not fix this error. cc", "positive_passages": [{"docid": "doc-en-rust-b18e884983d8c3d2388e4ff8230565fa2a682e9af0dfe93e21c5257476e05c5a", "text": " fn main() {} fn f() -> isize { fn f() -> isize {} pub f< //~^ ERROR missing `fn` or `struct` for function or struct definition //~| ERROR mismatched types //~ ERROR this file contains an un-closed delimiter ", "commid": "rust_pr_62887.0"}], "negative_passages": []}
{"query_id": "q-en-rust-29abb5ef31b1fcd0d5114ac873e6d18fa269ef369679885f469477bc9aacbc42", "query": "I'm getting an internal compiler error on the following program (found by ): I'm seeing the error on stable, beta, and nightly. As with , the error occurs on () but not on (). , the proposed fix for , does not fix this error. cc", "positive_passages": [{"docid": "doc-en-rust-297af6a496f40ac0241b367d2bc3d410286b5cf64a0f1d31aaa46109912b0edd", "text": " error: this file contains an un-closed delimiter --> $DIR/issue-62881.rs:6:53 | LL | fn f() -> isize { fn f() -> isize {} pub f< | - un-closed delimiter ... LL | | ^ error: missing `fn` or `struct` for function or struct definition --> $DIR/issue-62881.rs:3:41 | LL | fn f() -> isize { fn f() -> isize {} pub f< | ^ error[E0308]: mismatched types --> $DIR/issue-62881.rs:3:29 | LL | fn f() -> isize { fn f() -> isize {} pub f< | - ^^^^^ expected isize, found () | | | this function's body doesn't return | = note: expected type `isize` found type `()` error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_62887.0"}], "negative_passages": []}
{"query_id": "q-en-rust-29abb5ef31b1fcd0d5114ac873e6d18fa269ef369679885f469477bc9aacbc42", "query": "I'm getting an internal compiler error on the following program (found by ): I'm seeing the error on stable, beta, and nightly. As with , the error occurs on () but not on (). , the proposed fix for , does not fix this error. cc", "positive_passages": [{"docid": "doc-en-rust-f50bfb56418ef1acf9cd4336579b5e5f20bee9e9cb9c48d6aca7c40391f2c5fd", "text": " fn main() {} fn v() -> isize { //~ ERROR mismatched types mod _ { //~ ERROR expected identifier pub fn g() -> isizee { //~ ERROR cannot find type `isizee` in this scope mod _ { //~ ERROR expected identifier pub g() -> is //~ ERROR missing `fn` for function definition (), w20); } (), w20); //~ ERROR expected item, found `;` } ", "commid": "rust_pr_62887.0"}], "negative_passages": []}
{"query_id": "q-en-rust-29abb5ef31b1fcd0d5114ac873e6d18fa269ef369679885f469477bc9aacbc42", "query": "I'm getting an internal compiler error on the following program (found by ): I'm seeing the error on stable, beta, and nightly. As with , the error occurs on () but not on (). , the proposed fix for , does not fix this error. cc", "positive_passages": [{"docid": "doc-en-rust-5bb18e57c706e54c0ae914e460537ac34bafcebfca3d53348e24e9ce6ae9e7a7", "text": " error: expected identifier, found reserved identifier `_` --> $DIR/issue-62895.rs:4:5 | LL | mod _ { | ^ expected identifier, found reserved identifier error: expected identifier, found reserved identifier `_` --> $DIR/issue-62895.rs:6:5 | LL | mod _ { | ^ expected identifier, found reserved identifier error: missing `fn` for function definition --> $DIR/issue-62895.rs:7:4 | LL | pub g() -> is | ^^^^ help: add `fn` here to parse `g` as a public function | LL | pub fn g() -> is | ^^ error: expected item, found `;` --> $DIR/issue-62895.rs:10:9 | LL | (), w20); | ^ help: remove this semicolon error[E0412]: cannot find type `isizee` in this scope --> $DIR/issue-62895.rs:5:15 | LL | pub fn g() -> isizee { | ^^^^^^ help: a builtin type with a similar name exists: `isize` error[E0308]: mismatched types --> $DIR/issue-62895.rs:3:11 | LL | fn v() -> isize { | - ^^^^^ expected isize, found () | | | this function's body doesn't return | = note: expected type `isize` found type `()` error: aborting due to 6 previous errors Some errors have detailed explanations: E0308, E0412. For more information about an error, try `rustc --explain E0308`. ", "commid": "rust_pr_62887.0"}], "negative_passages": []}
{"query_id": "q-en-rust-722e1db4196169a0b89fb76519c81af5d11cff2d3152604fc46782a78ffa4cf2", "query": "This fails with: I believe this should compile because: non-async function with the same signature does compile; insignificant tweaks, like adding a wrapper struct, make it compile. Mentioning who fixed a similar error message in . Mentioning who touched this error message in and . rustc 1.38.0-nightly ( 2019-07-26) $DIR/min-choice-reject-ambiguous.rs:17:5 | LL | type_test::<'_, T>() // This should pass if we pick 'b. | ^^^^^^^^^^^^^^^^^^ ...so that the type `T` will meet its required lifetime bounds | help: consider adding an explicit lifetime bound... | LL | T: 'b + 'a, | ++++ error[E0309]: the parameter type `T` may not live long enough --> $DIR/min-choice-reject-ambiguous.rs:28:5 | LL | type_test::<'_, T>() // This should pass if we pick 'c. | ^^^^^^^^^^^^^^^^^^ ...so that the type `T` will meet its required lifetime bounds | help: consider adding an explicit lifetime bound... | LL | T: 'c + 'a, | ++++ error[E0700]: hidden type for `impl Cap<'b> + Cap<'c>` captures lifetime that does not appear in bounds --> $DIR/min-choice-reject-ambiguous.rs:39:5 | LL | fn test_ambiguous<'a, 'b, 'c>(s: &'a u8) -> impl Cap<'b> + Cap<'c> | -- hidden type `&'a u8` captures the lifetime `'a` as defined here ... LL | s | ^ | help: to declare that `impl Cap<'b> + Cap<'c>` captures `'a`, you can add an explicit `'a` lifetime bound | LL | fn test_ambiguous<'a, 'b, 'c>(s: &'a u8) -> impl Cap<'b> + Cap<'c> + 'a | ++++ error: aborting due to 3 previous errors Some errors have detailed explanations: E0309, E0700. For more information about an error, try `rustc --explain E0309`. ", "commid": "rust_pr_105300.0"}], "negative_passages": []}
{"query_id": "q-en-rust-722e1db4196169a0b89fb76519c81af5d11cff2d3152604fc46782a78ffa4cf2", "query": "This fails with: I believe this should compile because: non-async function with the same signature does compile; insignificant tweaks, like adding a wrapper struct, make it compile. Mentioning who fixed a similar error message in . Mentioning who touched this error message in and . rustc 1.38.0-nightly ( 2019-07-26) $DIR/nested-impl-trait-fail.rs:17:5 | LL | fn fail_early_bound<'s, 'a, 'b>(a: &'s u8) -> impl IntoIterator | ^^^^^^^ | ^^^^ warning: unused variable: `x` --> $DIR/never-assign-dead-code.rs:9:9", "commid": "rust_pr_64229.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2fb9fa6469da854d8149cb563726f531120bd823a98098e8e8653f0da72a17ee", "query": "rustc warns that the entire line is an unreachable expression however the code is clearly executed as can be see If the line was truly unreachable, there would be no panic... This bug appears on stable, beta and nightly\nThe is the unreachable part, so perhaps the error should be rephrased and pointed at that function call only\nHmm, so the span is wrong?\nthe order of execution is panic! and then everything else, so in some sense everything but the panic is unreachable. I'm not sure what a good span for that would be.\nThe method receiver is evaluated first, so the and calls still happen. The span can point just at the .\nAh, hm, I didn't expect that to be the case, but it makes sense. Then this seems relatively straightforward as to what the expected output is at least.\nI feel like this should also add a note that the panic is called before everything else. It could also suggest but that seems more like a clippy lint.\nSimilar case: Maybe we can simply add a note that won't be called because evaluation of its arguments leaves the current code path. I agree that suggestion is indeed too specific and may be better suited to clippy. If this is acceptable, I would like to pick it up! As long as it's fine for me to progress more slowly than normal\nGitHub wasn't clever enough to read that the PR said \"might close\" instead of \"close.\" The PR seems to imply more work can be done on this.", "positive_passages": [{"docid": "doc-en-rust-e29d1e113ab3d9a8f550edde2789ae7f7c5c77b872b47c1f65d5d4019ce6b57a", "text": "LL | #![deny(unreachable_code)] | ^^^^^^^^^^^^^^^^ error: unreachable expression error: unreachable call --> $DIR/expr_call.rs:18:5 | LL | bar(return); | ^^^^^^^^^^^ | ^^^ error: aborting due to 2 previous errors", "commid": "rust_pr_64229.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2fb9fa6469da854d8149cb563726f531120bd823a98098e8e8653f0da72a17ee", "query": "rustc warns that the entire line is an unreachable expression however the code is clearly executed as can be see If the line was truly unreachable, there would be no panic... This bug appears on stable, beta and nightly\nThe is the unreachable part, so perhaps the error should be rephrased and pointed at that function call only\nHmm, so the span is wrong?\nthe order of execution is panic! and then everything else, so in some sense everything but the panic is unreachable. I'm not sure what a good span for that would be.\nThe method receiver is evaluated first, so the and calls still happen. The span can point just at the .\nAh, hm, I didn't expect that to be the case, but it makes sense. Then this seems relatively straightforward as to what the expected output is at least.\nI feel like this should also add a note that the panic is called before everything else. It could also suggest but that seems more like a clippy lint.\nSimilar case: Maybe we can simply add a note that won't be called because evaluation of its arguments leaves the current code path. I agree that suggestion is indeed too specific and may be better suited to clippy. If this is acceptable, I would like to pick it up! As long as it's fine for me to progress more slowly than normal\nGitHub wasn't clever enough to read that the PR said \"might close\" instead of \"close.\" The PR seems to imply more work can be done on this.", "positive_passages": [{"docid": "doc-en-rust-933b9225efc0e63a787f41ecd163e29614f96103b01e94685a82c6b61b1d260f", "text": "LL | #![deny(unreachable_code)] | ^^^^^^^^^^^^^^^^ error: unreachable expression --> $DIR/expr_method.rs:21:5 error: unreachable call --> $DIR/expr_method.rs:21:9 | LL | Foo.bar(return); | ^^^^^^^^^^^^^^^ | ^^^ error: aborting due to 2 previous errors", "commid": "rust_pr_64229.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2fb9fa6469da854d8149cb563726f531120bd823a98098e8e8653f0da72a17ee", "query": "rustc warns that the entire line is an unreachable expression however the code is clearly executed as can be see If the line was truly unreachable, there would be no panic... This bug appears on stable, beta and nightly\nThe is the unreachable part, so perhaps the error should be rephrased and pointed at that function call only\nHmm, so the span is wrong?\nthe order of execution is panic! and then everything else, so in some sense everything but the panic is unreachable. I'm not sure what a good span for that would be.\nThe method receiver is evaluated first, so the and calls still happen. The span can point just at the .\nAh, hm, I didn't expect that to be the case, but it makes sense. Then this seems relatively straightforward as to what the expected output is at least.\nI feel like this should also add a note that the panic is called before everything else. It could also suggest but that seems more like a clippy lint.\nSimilar case: Maybe we can simply add a note that won't be called because evaluation of its arguments leaves the current code path. I agree that suggestion is indeed too specific and may be better suited to clippy. If this is acceptable, I would like to pick it up! As long as it's fine for me to progress more slowly than normal\nGitHub wasn't clever enough to read that the PR said \"might close\" instead of \"close.\" The PR seems to imply more work can be done on this.", "positive_passages": [{"docid": "doc-en-rust-94f1bf0d0604a90d685d242863a9e8fb530539a5cabfe110ada13306ebe2c8ee", "text": "get_u8()); //~ ERROR unreachable expression } fn diverge_second() { call( //~ ERROR unreachable expression call( //~ ERROR unreachable call get_u8(), diverge()); }", "commid": "rust_pr_64229.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2fb9fa6469da854d8149cb563726f531120bd823a98098e8e8653f0da72a17ee", "query": "rustc warns that the entire line is an unreachable expression however the code is clearly executed as can be see If the line was truly unreachable, there would be no panic... This bug appears on stable, beta and nightly\nThe is the unreachable part, so perhaps the error should be rephrased and pointed at that function call only\nHmm, so the span is wrong?\nthe order of execution is panic! and then everything else, so in some sense everything but the panic is unreachable. I'm not sure what a good span for that would be.\nThe method receiver is evaluated first, so the and calls still happen. The span can point just at the .\nAh, hm, I didn't expect that to be the case, but it makes sense. Then this seems relatively straightforward as to what the expected output is at least.\nI feel like this should also add a note that the panic is called before everything else. It could also suggest but that seems more like a clippy lint.\nSimilar case: Maybe we can simply add a note that won't be called because evaluation of its arguments leaves the current code path. I agree that suggestion is indeed too specific and may be better suited to clippy. If this is acceptable, I would like to pick it up! As long as it's fine for me to progress more slowly than normal\nGitHub wasn't clever enough to read that the PR said \"might close\" instead of \"close.\" The PR seems to imply more work can be done on this.", "positive_passages": [{"docid": "doc-en-rust-2a93dd5fd8cacc966c9cee4f3dde5081d83faf170c362c5d5604ed7e0a43e4bf", "text": "LL | #![deny(unreachable_code)] | ^^^^^^^^^^^^^^^^ error: unreachable expression error: unreachable call --> $DIR/unreachable-in-call.rs:17:5 | LL | / call( LL | | get_u8(), LL | | diverge()); | |__________________^ LL | call( | ^^^^ error: aborting due to 2 previous errors", "commid": "rust_pr_64229.0"}], "negative_passages": []}
{"query_id": "q-en-rust-39d5c4547e08ffd8f1589ccb012434bfb30da4436d533b9615a437af1b51d94e", "query": "Because is the 's default for application-level logging and should not affect the output of the tools.\neh, it's only recently we've shifted away from using it for tools -- but seems like the trend is definitely to do so. I've filed", "positive_passages": [{"docid": "doc-en-rust-ddcbce222ab4254d3ba333480e78d37ae91ebe4f4f557ffbbbe5e44a92a5f1fe", "text": "32_000_000 // 32MB on other platforms }; rustc_driver::set_sigpipe_handler(); env_logger::init(); env_logger::init_from_env(\"RUSTDOC_LOG\"); let res = std::thread::Builder::new().stack_size(thread_stack_size).spawn(move || { get_args().map(|args| main_args(&args)).unwrap_or(1) }).unwrap().join().unwrap_or(rustc_driver::EXIT_FAILURE);", "commid": "rust_pr_64329.0"}], "negative_passages": []}
{"query_id": "q-en-rust-49aa8ef8a4c00751dcefcd82fd43b142aa53459c3860af9e25ad919ff50ec42c", "query": "I encountered this as well. What project are you trying to build? My theory is that it tries to run something like (in ?) and then fails to parse the output which is a mix of and ...\nWay earlier, it runs it right at the start of a compilation to get the rustc version and enabled . Probably\nCould someone use to narrow this down to a particular commit?\nIt looks to be caused by (I would assume it is in that rollup). It is now printing the time-passes information even for .\nwould you be up for a fix here?\nI'll take a look on Monday. In the meantime, here's my best guess: I an extra entry to the output that prints the total time for the compilation. That extra line may not be indented at all, and it's possible this means it isn't ignored the same way as indented lines.\nMy theory about indentation was wrong. It was indeed a bad interaction between and , as others suggested, because they both print to . has a fix.", "positive_passages": [{"docid": "doc-en-rust-45d0fbb24e7bfa008c301977c095de37ff013766467f0d2eb1d892c150ae234d", "text": "impl Callbacks for TimePassesCallbacks { fn config(&mut self, config: &mut interface::Config) { // If a --prints=... option has been given, we don't print the \"total\" // time because it will mess up the --prints output. See #64339. self.time_passes = config.opts.debugging_opts.time_passes || config.opts.debugging_opts.time; config.opts.prints.is_empty() && (config.opts.debugging_opts.time_passes || config.opts.debugging_opts.time); } }", "commid": "rust_pr_64497.0"}], "negative_passages": []}
{"query_id": "q-en-rust-d72d9e96e93ba2b534c46a6283c2b104602764cc53d6d1b34be98bc927d23f45", "query": "Hello, I ran into a problem using lifetime with async struct methods, here is the example: struct Foo<'a> { swag: &'a i32 } impl Foo<'_> { async fn bar(&self) -> i32 { 1337 } } The error is: '_ Everything is fine if I use explicit lifetime like . Version: 1.39 Nightly (2019-09-19 on playground)\nThis is happening because the desugaring of uses the lifetime. This is sort of unfortunate, but seems backwards compatible to fix, so I don't think we should block on this.\nWe decided to focus on this one in the WG meeting -- you still interested in working on it?\nYes, I'm still interested in working on this. I don't think that this should be too hard to fix though if someone else wants it.", "positive_passages": [{"docid": "doc-en-rust-b9e460e2f7cb2e779a1464bed27ff203be96a2bfbea5188fc46e0a5eecd8aba1", "text": "/// header, we convert it to an in-band lifetime. fn collect_fresh_in_band_lifetime(&mut self, span: Span) -> ParamName { assert!(self.is_collecting_in_band_lifetimes); let index = self.lifetimes_to_define.len(); let index = self.lifetimes_to_define.len() + self.in_scope_lifetimes.len(); let hir_name = ParamName::Fresh(index); self.lifetimes_to_define.push((span, hir_name)); hir_name", "commid": "rust_pr_65142.0"}], "negative_passages": []}
{"query_id": "q-en-rust-d72d9e96e93ba2b534c46a6283c2b104602764cc53d6d1b34be98bc927d23f45", "query": "Hello, I ran into a problem using lifetime with async struct methods, here is the example: struct Foo<'a{ swag: &'a i32 } impl Foo<'{ async fn bar(&self) -i32 { 1337 } } The error is: '_ Everything is fine if I use explicit lifetime like . Version: 1.39 Nighly (2019-09-19 on playground)\nThis is happening because the desugaring of uses the lifetime. This is sort of unfortunate, but seems backwards compatible to fix, so I don't think we should block on this.\nWe decided to focus on this one in the WG meeting -- you still interested in working on it?\nYes, I'm still interested in working on this. I don't think that this should be too hard to fix though if someone else wants it.", "positive_passages": [{"docid": "doc-en-rust-c1f87b4a6af9fb431b397ed5d7e997a8f025880a4cbba5b68dbf1384d52b60c9", "text": " // check-pass // Check that the anonymous lifetimes used here aren't considered to shadow one // another. Note that `async fn` is different to `fn` here because the lifetimes // are numbered by HIR lowering, rather than lifetime resolution. // edition:2018 struct A<'a, 'b>(&'a &'b i32); struct B<'a>(&'a i32); impl A<'_, '_> { async fn assoc(x: &u32, y: B<'_>) { async fn nested(x: &u32, y: A<'_, '_>) {} } async fn assoc2(x: &u32, y: A<'_, '_>) { impl A<'_, '_> { async fn nested_assoc(x: &u32, y: B<'_>) {} } } } fn main() {} ", "commid": "rust_pr_65142.0"}], "negative_passages": []}
{"query_id": "q-en-rust-bef42648396272ed79e1a35ec64e2f10da9948c07ca00717f877741142cc6db4", "query": "If you have a of utf8 data there's three ways to get a : There doesn't appear to be any way to hand over the existing and have any necessary character changes into be done in place_. The type doesn't, for example, have a way to just do a lossy in-place conversion.\n- I would like to implement this (I have no contributions yet in the project). May i please claim the request?\nI think that normally the Libs team is given a chance to comment on the issue (it's only been 3 hours), but the worst that can happen is that they reject your PR. All I can say for sure is that I will not be implementing this particular patch myself.\nThanks for letting me know, I will wait until someone from the lib team will respond.\nSeems like the right API here is actually , since I believe there is never a need to allocate more bytes. Unfortunately I guess we'd need two APIs for this to be perfect, one on String as well, since there's no safe way to get from running this function to a of the bytes, right? The only safe way would be to do which is guaranteed to succeed but is costly...\n1) I don't want slice to slice. I definitely want owned to owned. I'm not sure why you'd think that slice to slice is somehow more correct. My goal is that whenever possible no additional allocations will immediately happen during the conversion (obviously this isn't always possible). 2) You definitely can have byte sequences where the lossy conversion into utf8 makes the sequence longer and thus could hit the capacity of the allocation and thus could trigger a reallocation. Trivial example: becomes , which is longer.\nHm, I had thought that the lossy conversion always replaced a single byte with a single byte, if that's not the case then the slice case is indeed not all that helpful probably. 
If this was true then the slice case would be good to have as a more general form, though.\nSo we want this signature: , right? That seems like it can be via a PR relatively easily.\nThat is one option. Another would be to add a method to so that you can consume that error into the lossy String. The value here is not having to start over on the utf8 parsing.\nThere was a post with possible designs on internals:\nI hope gets soon :)\nI prefer this approach for a couple of reasons. It signals whether or not the conversion was lossless. Whereas if we went with the signature , some pairs of inputs collide, like the pair and , or the pair and . Also, it will deter people from checking for the presence of to tell if something went wrong. This is a footgun I've seen in the wild. And even for users who \"just want a string\" and don't care about the error, you can still reasonably get that in single readable line:", "positive_passages": [{"docid": "doc-en-rust-74588767088f8b655322e6c35c64a42398e500c459a6917f6783e3cb3785e54a", "text": "Cow::Owned(res) } /// Converts a [`Vec \"::max_value());\", \"::max_value()); assert_eq!(\", stringify!($SelfT), \"::min_value().saturating_add(-1), \", stringify!($SelfT), \"::min_value());\", $EndFeature, \" ```\"),", "commid": "rust_pr_64943.0"}], "negative_passages": []}
{"query_id": "q-en-rust-fcbe98ea5f88f4e1815b1c9de18ab365c7de07a1ba01ba69212c0b6dc6ec3ad4", "query": "Godbolt link: So I expect these two functions has the same assembly: But the result is not:\nmodify labels: -C-bug -I-slow T-compiler\nHey I tried solving this but wasn't really sure how to. I believe the implementation to the LLVM generation is right here. Hope someone can solve this!\nTurn out I was wrong! The is working correctly (i.e. those two functions are not equivalent): I would add some doctests to show that and close this issue.", "positive_passages": [{"docid": "doc-en-rust-d540cafe3d99a996bc156d5b345801078c0b04360daf7a763e5bbc06d9a69225", "text": "} } doc_comment! { concat!(\"Saturating integer subtraction. Computes `self - rhs`, saturating at the numeric bounds instead of overflowing.", "commid": "rust_pr_64943.0"}], "negative_passages": []}
{"query_id": "q-en-rust-fcbe98ea5f88f4e1815b1c9de18ab365c7de07a1ba01ba69212c0b6dc6ec3ad4", "query": "Godbolt link: So I expect these two functions has the same assembly: But the result is not:\nmodify labels: -C-bug -I-slow T-compiler\nHey I tried solving this but wasn't really sure how to. I believe the implementation to the LLVM generation is right here. Hope someone can solve this!\nTurn out I was wrong! The is working correctly (i.e. those two functions are not equivalent): I would add some doctests to show that and close this issue.", "positive_passages": [{"docid": "doc-en-rust-08c47cd6bce9687f2c8613ac40d3db3de3b82fad0a610d51bc2189aaa1c0373e", "text": "``` \", $Feature, \"assert_eq!(100\", stringify!($SelfT), \".saturating_sub(127), -27); assert_eq!(\", stringify!($SelfT), \"::min_value().saturating_sub(100), \", stringify!($SelfT), \"::min_value());\", \"::min_value()); assert_eq!(\", stringify!($SelfT), \"::max_value().saturating_sub(-1), \", stringify!($SelfT), \"::max_value());\", $EndFeature, \" ```\"), #[stable(feature = \"rust1\", since = \"1.0.0\")]", "commid": "rust_pr_64943.0"}], "negative_passages": []}
{"query_id": "q-en-rust-535353bdfb3d3e5ccefea87e0d54068dc3ecfb7c686a60b3a012fad9cbb5afa0", "query": "The following example code yields an error: It's unclear from this error message how to proceed - I'm not sure of the syntax, location, or where I should be putting the \"GetString\" trait bound. It should be shown with a better example or link when this error is presented, as the current message is not sufficient for a resolution to be arrived at.\ncc\nFixing in . To fix your code write:\nThanks!", "positive_passages": [{"docid": "doc-en-rust-d327a68663d4bffa5b0661d90859b574aa36645fc1d080166a0c1262120befd0", "text": "} else { \"items from traits can only be used if the trait is implemented and in scope\" }); let mut msg = format!( let message = |action| format!( \"the following {traits_define} an item `{name}`, perhaps you need to {action} {one_of_them}:\", traits_define = if candidates.len() == 1 {", "commid": "rust_pr_65242.0"}], "negative_passages": []}
{"query_id": "q-en-rust-535353bdfb3d3e5ccefea87e0d54068dc3ecfb7c686a60b3a012fad9cbb5afa0", "query": "The following example code yields an error: It's unclear from this error message how to proceed - I'm not sure of the syntax, location, or where I should be putting the \"GetString\" trait bound. It should be shown with a better example or link when this error is presented, as the current message is not sufficient for a resolution to be arrived at.\ncc\nFixing in . To fix your code write:\nThanks!", "positive_passages": [{"docid": "doc-en-rust-ba833a948cf08c1014b53d2988926f7e3123fbcad7da37bfa8eed7cedc447342", "text": "} else { \"traits define\" }, action = if let Some(param) = param_type { format!(\"restrict type parameter `{}` with\", param) } else { \"implement\".to_string() }, action = action, one_of_them = if candidates.len() == 1 { \"it\" } else {", "commid": "rust_pr_65242.0"}], "negative_passages": []}
{"query_id": "q-en-rust-535353bdfb3d3e5ccefea87e0d54068dc3ecfb7c686a60b3a012fad9cbb5afa0", "query": "The following example code yields an error: It's unclear from this error message how to proceed - I'm not sure of the syntax, location, or where I should be putting the \"GetString\" trait bound. It should be shown with a better example or link when this error is presented, as the current message is not sufficient for a resolution to be arrived at.\ncc\nFixing in . To fix your code write:\nThanks!", "positive_passages": [{"docid": "doc-en-rust-15277cc96da04b9002d17734570ab6610639f7b0e928fe7eda3bd3add7d5ba31", "text": "// Get the `hir::Param` to verify whether it already has any bounds. // We do this to avoid suggesting code that ends up as `T: FooBar`, // instead we suggest `T: Foo + Bar` in that case. let mut has_bounds = None; let mut impl_trait = false; if let Node::GenericParam(ref param) = hir.get(id) { let kind = ¶m.kind; if let hir::GenericParamKind::Type { synthetic: Some(_), .. } = kind { // We've found `fn foo(x: impl Trait)` instead of // `fn foo match hir.get(id) { Node::GenericParam(ref param) => { let mut impl_trait = false; let has_bounds = if let hir::GenericParamKind::Type { synthetic: Some(_), .. } = ¶m.kind { // We've found `fn foo(x: impl Trait)` instead of // `fn foo let sp = hir.span(id); // `sp` only covers `T`, change it so that it covers `T:` when appropriate. let sp = if let Some(first_bound) = has_bounds { sp.until(first_bound.span()) } else { sp }; // FIXME: contrast `t.def_id` against `param.bounds` to not suggest traits // already there. That can happen when the cause is that we're in a const // scope or associated function used as a method. 
err.span_suggestions( sp, &msg[..], candidates.iter().map(|t| format!( \"{}{} {}{}\", param, if impl_trait { \" +\" } else { \":\" }, self.tcx.def_path_str(t.def_id), if has_bounds.is_some() { \" + \" } else { \"\" }, )), Applicability::MaybeIncorrect, ); suggested = true; } }; } if !suggested { let mut msg = message(if let Some(param) = param_type { format!(\"restrict type parameter `{}` with\", param) } else { \"implement\".to_string() }); for (i, trait_info) in candidates.iter().enumerate() { msg.push_str(&format!( \"ncandidate #{}: `{}`\",", "commid": "rust_pr_65242.0"}], "negative_passages": []}
{"query_id": "q-en-rust-535353bdfb3d3e5ccefea87e0d54068dc3ecfb7c686a60b3a012fad9cbb5afa0", "query": "The following example code yields an error: It's unclear from this error message how to proceed - I'm not sure of the syntax, location, or where I should be putting the \"GetString\" trait bound. It should be shown with a better example or link when this error is presented, as the current message is not sufficient for a resolution to be arrived at.\ncc\nFixing in . To fix your code write:\nThanks!", "positive_passages": [{"docid": "doc-en-rust-5d8dc1e940ab0b118710e52c0419c2d37d382cf6c9ed5ecfc4d941c6b5fd48c0", "text": " // run-rustfix // check-only #[derive(Debug)] struct Demo { a: String } trait GetString { fn get_a(&self) -> &String; } trait UseString: std::fmt::Debug + GetString { fn use_string(&self) { println!(\"{:?}\", self.get_a()); //~ ERROR no method named `get_a` found for type `&Self` } } trait UseString2: GetString { fn use_string(&self) { println!(\"{:?}\", self.get_a()); //~ ERROR no method named `get_a` found for type `&Self` } } impl GetString for Demo { fn get_a(&self) -> &String { &self.a } } impl UseString for Demo {} impl UseString2 for Demo {} #[cfg(test)] mod tests { use crate::{Demo, UseString}; #[test] fn it_works() { let d = Demo { a: \"test\".to_string() }; d.use_string(); } } fn main() {} ", "commid": "rust_pr_65242.0"}], "negative_passages": []}
{"query_id": "q-en-rust-535353bdfb3d3e5ccefea87e0d54068dc3ecfb7c686a60b3a012fad9cbb5afa0", "query": "The following example code yields an error: It's unclear from this error message how to proceed - I'm not sure of the syntax, location, or where I should be putting the \"GetString\" trait bound. It should be shown with a better example or link when this error is presented, as the current message is not sufficient for a resolution to be arrived at.\ncc\nFixing in . To fix your code write:\nThanks!", "positive_passages": [{"docid": "doc-en-rust-fc8d26f6ecda2e1ef1d0d07c9cb081fb850db21b39efc8ed840014f802d9ce5a", "text": " // run-rustfix // check-only #[derive(Debug)] struct Demo { a: String } trait GetString { fn get_a(&self) -> &String; } trait UseString: std::fmt::Debug { fn use_string(&self) { println!(\"{:?}\", self.get_a()); //~ ERROR no method named `get_a` found for type `&Self` } } trait UseString2 { fn use_string(&self) { println!(\"{:?}\", self.get_a()); //~ ERROR no method named `get_a` found for type `&Self` } } impl GetString for Demo { fn get_a(&self) -> &String { &self.a } } impl UseString for Demo {} impl UseString2 for Demo {} #[cfg(test)] mod tests { use crate::{Demo, UseString}; #[test] fn it_works() { let d = Demo { a: \"test\".to_string() }; d.use_string(); } } fn main() {} ", "commid": "rust_pr_65242.0"}], "negative_passages": []}
{"query_id": "q-en-rust-535353bdfb3d3e5ccefea87e0d54068dc3ecfb7c686a60b3a012fad9cbb5afa0", "query": "The following example code yields an error: It's unclear from this error message how to proceed - I'm not sure of the syntax, location, or where I should be putting the \"GetString\" trait bound. It should be shown with a better example or link when this error is presented, as the current message is not sufficient for a resolution to be arrived at.\ncc\nFixing in . To fix your code write:\nThanks!", "positive_passages": [{"docid": "doc-en-rust-b4f8e9b750d3cd8e7363971026bcea66587a5271e4f60b8e1c8081e3fbff6e02", "text": " error[E0599]: no method named `get_a` found for type `&Self` in the current scope --> $DIR/constrain-trait.rs:15:31 | LL | println!(\"{:?}\", self.get_a()); | ^^^^^ method not found in `&Self` | = help: items from traits can only be used if the type parameter is bounded by the trait help: the following trait defines an item `get_a`, perhaps you need to add another supertrait for it: | LL | trait UseString: std::fmt::Debug + GetString { | ^^^^^^^^^^^ error[E0599]: no method named `get_a` found for type `&Self` in the current scope --> $DIR/constrain-trait.rs:21:31 | LL | println!(\"{:?}\", self.get_a()); | ^^^^^ method not found in `&Self` | = help: items from traits can only be used if the type parameter is bounded by the trait help: the following trait defines an item `get_a`, perhaps you need to add a supertrait for it: | LL | trait UseString2: GetString { | ^^^^^^^^^^^ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0599`. ", "commid": "rust_pr_65242.0"}], "negative_passages": []}
{"query_id": "q-en-rust-0626f870abdc7b02cb3164359835e6879e411877a00fb45670709faceda10775", "query": "cc Code first. This code: , gives this set of error messages: The part that I'm most concerned about is: Before eRFC \"if- and while-let-chains, take 2\" (rust-lang/rfcs, , ), this code USED to result in this error message: The reason I know this is because , in the section where we're trying to explain the difference between statements and expressions. The error message I'm seeing now is muddying the waters by saying \" expressions\". Based on my reading of the eRFC, it's only supposed to change , but as kind of a side effect is now sort-of an expression? The don't clear it up for me, as they only describe , not plain s. What I expected is that even though the eRFC has been accepted and implemented, plain would still be considered a statement and the error message wouldn't talk about \" expressions\". If my expectation is valid, then the compiler error message is a bug. If my expectation is invalid, please let me know so that I can work on updating the book. Thanks! $DIR/break-outside-loop.rs:30:13 | LL | || { | -- enclosing closure LL | break 'lab; | ^^^^^^^^^^ cannot `break` inside of a closure error: aborting due to 6 previous errors Some errors have detailed explanations: E0267, E0268. For more information about an error, try `rustc --explain E0267`.", "commid": "rust_pr_65518.0"}], "negative_passages": []}
{"query_id": "q-en-rust-017e8c583a49d3178e5e0e66e0628a3f29d6120f4d2fd75bea1da2fe14a34301", "query": "The following code was working in nightly-2019-09-05, but no longer works in nightly-2019-10-15. The workaround is not quite intuitive. () Errors: $DIR/issue-64130-4-async-move.rs:19:15 | LL | match client.status() {", "commid": "rust_pr_68269.0"}], "negative_passages": []}
{"query_id": "q-en-rust-017e8c583a49d3178e5e0e66e0628a3f29d6120f4d2fd75bea1da2fe14a34301", "query": "The following code was working in nightly-2019-09-05, but no longer works in nightly-2019-10-15. The workaround is not quite intuitive. () Errors: $DIR/issue-65436-raw-ptr-not-send.rs:12:5 | LL | fn assert_send version = \"0.2.77\" version = \"0.2.79\" source = \"registry+https://github.com/rust-lang/crates.io-index\" checksum = \"f2f96b10ec2560088a8e76961b00d47107b3a625fecb76dedb29ee7ccbf98235\" checksum = \"2448f6066e80e3bfc792e9c98bf705b4b0fc6e8ef5b43e5889aff0eaa9c58743\" dependencies = [ \"rustc-std-workspace-core\", ]", "commid": "rust_pr_77386.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. Another thing is glibc doesn't really support static linking, sure it works well for small software but bites you when you need more of it's features.\n(Obviously not a lawyer but), I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso as long as the program it is linked against is licensed under a GPL compatible license, it is allowed. The MIT license, is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nbecause of we need special linking arguments because of the symbols. see also for and glibc uses for the nss modules.\nThe former should be doable. 
The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-681de54f276df9f91faf671de9c8bbb5ad7a402249d5ab587a3135e440088aa2", "text": "base.position_independent_executables = true; base.has_elf_tls = false; base.requires_uwtable = true; base.crt_static_respected = false; base }", "commid": "rust_pr_77386.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. Another thing is glibc doesn't really support static linking, sure it works well for small software but bites you when you need more of it's features.\n(Obviously not a lawyer but), I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso as long as the program it is linked against is licensed under a GPL compatible license, it is allowed. The MIT license, is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nbecause of we need special linking arguments because of the symbols. see also for and glibc uses for the nss modules.\nThe former should be doable. 
The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-94e5488e445f9c91451318c93075d0e90a12a757c156c26a7cac5f0c7417e7c4", "text": "position_independent_executables: true, relro_level: RelroLevel::Full, has_elf_tls: true, crt_static_respected: true, ..Default::default() } }", "commid": "rust_pr_77386.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. Another thing is glibc doesn't really support static linking, sure it works well for small software but bites you when you need more of it's features.\n(Obviously not a lawyer but), I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso as long as the program it is linked against is licensed under a GPL compatible license, it is allowed. The MIT license, is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nbecause of we need special linking arguments because of the symbols. see also for and glibc uses for the nss modules.\nThe former should be doable. 
The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-72de40c2a86908bfdce1ce6b5b612a28c054189651e9447772688fa8a197698a", "text": "// These targets statically link libc by default base.crt_static_default = true; // These targets allow the user to choose between static and dynamic linking. base.crt_static_respected = true; base }", "commid": "rust_pr_77386.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. Another thing is glibc doesn't really support static linking, sure it works well for small software but bites you when you need more of it's features.\n(Obviously not a lawyer but), I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso as long as the program it is linked against is licensed under a GPL compatible license, it is allowed. The MIT license, is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nbecause of we need special linking arguments because of the symbols. see also for and glibc uses for the nss modules.\nThe former should be doable. 
The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-865cdda57dd6430fdc01fdf2792ef74703cc29b3550cbf045170bd83d44f6916", "text": "panic_unwind = { path = \"../panic_unwind\", optional = true } panic_abort = { path = \"../panic_abort\" } core = { path = \"../core\" } libc = { version = \"0.2.77\", default-features = false, features = ['rustc-dep-of-std'] } libc = { version = \"0.2.79\", default-features = false, features = ['rustc-dep-of-std'] } compiler_builtins = { version = \"0.1.35\" } profiler_builtins = { path = \"../profiler_builtins\", optional = true } unwind = { path = \"../unwind\" }", "commid": "rust_pr_77386.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. Another thing is glibc doesn't really support static linking, sure it works well for small software but bites you when you need more of it's features.\n(Obviously not a lawyer but), I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso as long as the program it is linked against is licensed under a GPL compatible license, it is allowed. The MIT license, is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nbecause of we need special linking arguments because of the symbols. see also for and glibc uses for the nss modules.\nThe former should be doable. 
The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-8bd01924e82c23e2343d595f06da461bed8c7f7dee3c489fc45de630d95a230e", "text": "} fn exited(&self) -> bool { // On Linux-like OSes this function is safe, on others it is not. See // libc issue: https://github.com/rust-lang/libc/issues/1888. #[cfg_attr( any(target_os = \"linux\", target_os = \"android\", target_os = \"emscripten\"), allow(unused_unsafe) )] unsafe { libc::WIFEXITED(self.0) } libc::WIFEXITED(self.0) } pub fn success(&self) -> bool {", "commid": "rust_pr_77386.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. Another thing is glibc doesn't really support static linking, sure it works well for small software but bites you when you need more of it's features.\n(Obviously not a lawyer but), I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso as long as the program it is linked against is licensed under a GPL compatible license, it is allowed. The MIT license, is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nbecause of we need special linking arguments because of the symbols. see also for and glibc uses for the nss modules.\nThe former should be doable. 
The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-a82e2ce3b21cbb12b5f52623839105347ca09140a4859f96a3ab38210040fb82", "text": "} pub fn code(&self) -> Option // On Linux-like OSes this function is safe, on others it is not. See // libc issue: https://github.com/rust-lang/libc/issues/1888. #[cfg_attr( any(target_os = \"linux\", target_os = \"android\", target_os = \"emscripten\"), allow(unused_unsafe) )] if self.exited() { Some(unsafe { libc::WEXITSTATUS(self.0) }) } else { None } if self.exited() { Some(libc::WEXITSTATUS(self.0)) } else { None } } pub fn signal(&self) -> Option // On Linux-like OSes this function is safe, on others it is not. See // libc issue: https://github.com/rust-lang/libc/issues/1888. #[cfg_attr( any(target_os = \"linux\", target_os = \"android\", target_os = \"emscripten\"), allow(unused_unsafe) )] if !self.exited() { Some(unsafe { libc::WTERMSIG(self.0) }) } else { None } if !self.exited() { Some(libc::WTERMSIG(self.0)) } else { None } } }", "commid": "rust_pr_77386.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. Another thing is glibc doesn't really support static linking, sure it works well for small software but bites you when you need more of it's features.\n(Obviously not a lawyer but), I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso as long as the program it is linked against is licensed under a GPL compatible license, it is allowed. The MIT license, is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nbecause of we need special linking arguments because of the symbols. see also for and glibc uses for the nss modules.\nThe former should be doable. 
The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-46aadcd4808a6fdd896b5959b1b561b82d66d05ded39425b2bfc7cd25218af8e", "text": "[dependencies] core = { path = \"../core\" } libc = { version = \"0.2.51\", features = ['rustc-dep-of-std'], default-features = false } libc = { version = \"0.2.79\", features = ['rustc-dep-of-std'], default-features = false } compiler_builtins = \"0.1.0\" cfg-if = \"0.1.8\"", "commid": "rust_pr_77386.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. Another thing is glibc doesn't really support static linking, sure it works well for small software but bites you when you need more of it's features.\n(Obviously not a lawyer but), I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso as long as the program it is linked against is licensed under a GPL compatible license, it is allowed. The MIT license, is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nbecause of we need special linking arguments because of the symbols. see also for and glibc uses for the nss modules.\nThe former should be doable. 
The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-c05491ea5c7d1287ef2d9c3e7a85b0597b39e3edc35caa93ab98fd201409f7d8", "text": "} else if target.contains(\"x86_64-fortanix-unknown-sgx\") { llvm_libunwind::compile(); } else if target.contains(\"linux\") { // linking for Linux is handled in lib.rs if target.contains(\"musl\") { // linking for musl is handled in lib.rs llvm_libunwind::compile(); } else if !target.contains(\"android\") { println!(\"cargo:rustc-link-lib=gcc_s\"); } } else if target.contains(\"freebsd\") { println!(\"cargo:rustc-link-lib=gcc_s\");", "commid": "rust_pr_77386.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. Another thing is glibc doesn't really support static linking, sure it works well for small software but bites you when you need more of it's features.\n(Obviously not a lawyer but), I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso as long as the program it is linked against is licensed under a GPL compatible license, it is allowed. The MIT license, is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nbecause of we need special linking arguments because of the symbols. see also for and glibc uses for the nss modules.\nThe former should be doable. 
The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-e6ed2e5564a94a6e1bc77fb1dfafe4469e0fe0d969438656bece631e68052dab", "text": "#[link(name = \"gcc_s\", cfg(not(target_feature = \"crt-static\")))] extern \"C\" {} // When building with crt-static, we get `gcc_eh` from the `libc` crate, since // glibc needs it, and needs it listed later on the linker command line. We // don't want to duplicate it here. #[cfg(all(target_os = \"linux\", target_env = \"gnu\", not(feature = \"llvm-libunwind\")))] #[link(name = \"gcc_s\", cfg(not(target_feature = \"crt-static\")))] extern \"C\" {} #[cfg(target_os = \"redox\")] #[link(name = \"gcc_eh\", kind = \"static-nobundle\", cfg(target_feature = \"crt-static\"))] #[link(name = \"gcc_s\", cfg(not(target_feature = \"crt-static\")))]", "commid": "rust_pr_77386.0"}], "negative_passages": []}
{"query_id": "q-en-rust-0594d473233460ab81a84b76ecf113288d0f1915abf241ae595d34670daf459d", "query": "On 2019-10-24 we had a CI outage due to the ca-certificates msys2 package being broken. The hack implemented (:heart:) is to download an older, known good package beforehand and install it after msys2 updated its packages. Once is fixed we should revert the hack, but I think at this point it's worth investing in vendoring msys2 as a whole.\nmsys2 is using and other tools from archlinux distro. Arch Linux provides per-day official repositories snapshots. User can \"pin\" to a specific date. I think it is possible to reuse that mechanism to vendoring msys2 as a whole, by providing a mirror of a specific date's version. See here for more descriptions .\nthat would be nice but msys doesn't remove old packages from their repo so the mirror would be enormous. Search for in , only those packages take dozens of GiB. Also extracting it from archive would make CI a bit faster because you won't be modifying thousands of small files over and over. On Linux that is not a problem but working through cygwin emulation a bit struggles on this part.\nNo update on this, planning to discuss what to vendor at the all hands.\nAt the last infra meeting we decided a good step is to move towards using the pre-installed msys2 packages. In theory this is as simple as removing the msys2 installation step and pointing things at the right location on the images in CI.", "positive_passages": [{"docid": "doc-en-rust-9e467815931607ec7a17a3a21a91ca7d5c7b2cae5bf664a7c8bf1bc07f9770ea", "text": "- name: install MSYS2 run: src/ci/scripts/install-msys2.sh if: success() && !env.SKIP_JOB - name: install MSYS2 packages run: src/ci/scripts/install-msys2-packages.sh if: success() && !env.SKIP_JOB - name: install MinGW run: src/ci/scripts/install-mingw.sh if: success() && !env.SKIP_JOB", "commid": "rust_pr_73188.0"}], "negative_passages": []}
{"query_id": "q-en-rust-0594d473233460ab81a84b76ecf113288d0f1915abf241ae595d34670daf459d", "query": "On 2019-10-24 we had a CI outage due to the ca-certificates msys2 package being broken. The hack implemented (:heart:) is to download an older, known good package beforehand and install it after msys2 updated its packages. Once is fixed we should revert the hack, but I think at this point it's worth investing in vendoring msys2 as a whole.\nmsys2 is using and other tools from archlinux distro. Arch Linux provides per-day official repositories snapshots. User can \"pin\" to a specific date. I think it is possible to reuse that mechanism to vendoring msys2 as a whole, by providing a mirror of a specific date's version. See here for more descriptions .\nthat would be nice but msys doesn't remove old packages from their repo so the mirror would be enormous. Search for in , only those packages take dozens of GiB. Also extracting it from archive would make CI a bit faster because you won't be modifying thousands of small files over and over. On Linux that is not a problem but working through cygwin emulation a bit struggles on this part.\nNo update on this, planning to discuss what to vendor at the all hands.\nAt the last infra meeting we decided a good step is to move towards using the pre-installed msys2 packages. In theory this is as simple as removing the msys2 installation step and pointing things at the right location on the images in CI.", "positive_passages": [{"docid": "doc-en-rust-82e51614b0413a4984b2d835bcd05dd04969ac90c8b75bf724a61f0f09a3efbc", "text": "displayName: Install msys2 condition: and(succeeded(), not(variables.SKIP_JOB)) - bash: src/ci/scripts/install-msys2-packages.sh displayName: Install msys2 packages condition: and(succeeded(), not(variables.SKIP_JOB)) - bash: src/ci/scripts/install-mingw.sh displayName: Install MinGW condition: and(succeeded(), not(variables.SKIP_JOB))", "commid": "rust_pr_73188.0"}], "negative_passages": []}
{"query_id": "q-en-rust-0594d473233460ab81a84b76ecf113288d0f1915abf241ae595d34670daf459d", "query": "On 2019-10-24 we had a CI outage due to the ca-certificates msys2 package being broken. The hack implemented (:heart:) is to download an older, known good package beforehand and install it after msys2 updated its packages. Once is fixed we should revert the hack, but I think at this point it's worth investing in vendoring msys2 as a whole.\nmsys2 is using and other tools from archlinux distro. Arch Linux provides per-day official repositories snapshots. User can \"pin\" to a specific date. I think it is possible to reuse that mechanism to vendoring msys2 as a whole, by providing a mirror of a specific date's version. See here for more descriptions .\nthat would be nice but msys doesn't remove old packages from their repo so the mirror would be enormous. Search for in , only those packages take dozens of GiB. Also extracting it from archive would make CI a bit faster because you won't be modifying thousands of small files over and over. On Linux that is not a problem but working through cygwin emulation a bit struggles on this part.\nNo update on this, planning to discuss what to vendor at the all hands.\nAt the last infra meeting we decided a good step is to move towards using the pre-installed msys2 packages. In theory this is as simple as removing the msys2 installation step and pointing things at the right location on the images in CI.", "positive_passages": [{"docid": "doc-en-rust-2edbba3efef26f18ee349936e4a56cc032560c9579ebe118669ecacb5081c60c", "text": "run: src/ci/scripts/install-msys2.sh <<: *step - name: install MSYS2 packages run: src/ci/scripts/install-msys2-packages.sh <<: *step - name: install MinGW run: src/ci/scripts/install-mingw.sh <<: *step", "commid": "rust_pr_73188.0"}], "negative_passages": []}
{"query_id": "q-en-rust-0594d473233460ab81a84b76ecf113288d0f1915abf241ae595d34670daf459d", "query": "On 2019-10-24 we had a CI outage due to the ca-certificates msys2 package being broken. The hack implemented (:heart:) is to download an older, known good package beforehand and install it after msys2 updated its packages. Once is fixed we should revert the hack, but I think at this point it's worth investing in vendoring msys2 as a whole.\nmsys2 is using and other tools from archlinux distro. Arch Linux provides per-day official repositories snapshots. User can \"pin\" to a specific date. I think it is possible to reuse that mechanism to vendoring msys2 as a whole, by providing a mirror of a specific date's version. See here for more descriptions .\nthat would be nice but msys doesn't remove old packages from their repo so the mirror would be enormous. Search for in , only those packages take dozens of GiB. Also extracting it from archive would make CI a bit faster because you won't be modifying thousands of small files over and over. On Linux that is not a problem but working through cygwin emulation a bit struggles on this part.\nNo update on this, planning to discuss what to vendor at the all hands.\nAt the last infra meeting we decided a good step is to move towards using the pre-installed msys2 packages. In theory this is as simple as removing the msys2 installation step and pointing things at the right location on the images in CI.", "positive_passages": [{"docid": "doc-en-rust-d10178388aa1c8283683c5dec762781d01812e08a026f43ca524d367aaecd20a", "text": " #!/bin/bash set -euo pipefail IFS=$'nt' source \"$(cd \"$(dirname \"$0\")\" && pwd)/../shared.sh\" if isWindows; then pacman -S --noconfirm --needed base-devel ca-certificates make diffutils tar binutils # Detect the native Python version installed on the agent. On GitHub # Actions, the C:hostedtoolcachewindowsPython directory contains a # subdirectory for each installed Python version. 
# # The -V flag of the sort command sorts the input by version number. native_python_version=\"$(ls /c/hostedtoolcache/windows/Python | sort -Vr | head -n 1)\" # Make sure we use the native python interpreter instead of some msys equivalent # one way or another. The msys interpreters seem to have weird path conversions # baked in which break LLVM's build system one way or another, so let's use the # native version which keeps everything as native as possible. python_home=\"/c/hostedtoolcache/windows/Python/${native_python_version}/x64\" cp \"${python_home}/python.exe\" \"${python_home}/python3.exe\" ciCommandAddPath \"C:hostedtoolcachewindowsPython${native_python_version}x64\" ciCommandAddPath \"C:hostedtoolcachewindowsPython${native_python_version}x64Scripts\" fi ", "commid": "rust_pr_73188.0"}], "negative_passages": []}
{"query_id": "q-en-rust-0594d473233460ab81a84b76ecf113288d0f1915abf241ae595d34670daf459d", "query": "On 2019-10-24 we had a CI outage due to the ca-certificates msys2 package being broken. The hack implemented (:heart:) is to download an older, known good package beforehand and install it after msys2 updated its packages. Once is fixed we should revert the hack, but I think at this point it's worth investing in vendoring msys2 as a whole.\nmsys2 is using and other tools from archlinux distro. Arch Linux provides per-day official repositories snapshots. User can \"pin\" to a specific date. I think it is possible to reuse that mechanism to vendoring msys2 as a whole, by providing a mirror of a specific date's version. See here for more descriptions .\nthat would be nice but msys doesn't remove old packages from their repo so the mirror would be enormous. Search for in , only those packages take dozens of GiB. Also extracting it from archive would make CI a bit faster because you won't be modifying thousands of small files over and over. On Linux that is not a problem but working through cygwin emulation a bit struggles on this part.\nNo update on this, planning to discuss what to vendor at the all hands.\nAt the last infra meeting we decided a good step is to move towards using the pre-installed msys2 packages. In theory this is as simple as removing the msys2 installation step and pointing things at the right location on the images in CI.", "positive_passages": [{"docid": "doc-en-rust-d40eaec0a74b3529b0d730cc78863e7219a38110f245200b22f7236ca2b78e73", "text": "#!/bin/bash # Download and install MSYS2, needed primarily for the test suite (run-make) but # also used by the MinGW toolchain for assembling things. # # FIXME: we should probe the default azure image and see if we can use the MSYS2 # toolchain there. (if there's even one there). For now though this gets the job # done. set -euo pipefail IFS=$'nt'", "commid": "rust_pr_73188.0"}], "negative_passages": []}
{"query_id": "q-en-rust-0594d473233460ab81a84b76ecf113288d0f1915abf241ae595d34670daf459d", "query": "On 2019-10-24 we had a CI outage due to the ca-certificates msys2 package being broken. The hack implemented (:heart:) is to download an older, known good package beforehand and install it after msys2 updated its packages. Once is fixed we should revert the hack, but I think at this point it's worth investing in vendoring msys2 as a whole.\nmsys2 is using and other tools from archlinux distro. Arch Linux provides per-day official repositories snapshots. User can \"pin\" to a specific date. I think it is possible to reuse that mechanism to vendoring msys2 as a whole, by providing a mirror of a specific date's version. See here for more descriptions .\nthat would be nice but msys doesn't remove old packages from their repo so the mirror would be enormous. Search for in , only those packages take dozens of GiB. Also extracting it from archive would make CI a bit faster because you won't be modifying thousands of small files over and over. On Linux that is not a problem but working through cygwin emulation a bit struggles on this part.\nNo update on this, planning to discuss what to vendor at the all hands.\nAt the last infra meeting we decided a good step is to move towards using the pre-installed msys2 packages. In theory this is as simple as removing the msys2 installation step and pointing things at the right location on the images in CI.", "positive_passages": [{"docid": "doc-en-rust-3642795ec531c306b07feeee17a3d3afcc245a9f38920b5e99448f5d15f0e4cf", "text": "source \"$(cd \"$(dirname \"$0\")\" && pwd)/../shared.sh\" if isWindows; then # Pre-followed the api/v2 URL to the CDN since the API can be a bit flakey curl -sSL https://packages.chocolatey.org/msys2.20190524.0.0.20191030.nupkg > msys2.nupkg curl -sSL https://packages.chocolatey.org/chocolatey-core.extension.1.3.5.1.nupkg > chocolatey-core.extension.nupkg choco install -s . 
msys2 --params=\"/InstallDir:$(ciCheckoutPath)/msys2 /NoPath\" -y --no-progress rm msys2.nupkg chocolatey-core.extension.nupkg mkdir -p \"$(ciCheckoutPath)/msys2/home/${USERNAME}\" ciCommandAddPath \"$(ciCheckoutPath)/msys2/usr/bin\" msys2Path=\"c:/msys64\" mkdir -p \"${msys2Path}/home/${USERNAME}\" ciCommandAddPath \"${msys2Path}/usr/bin\" echo \"switching shell to use our own bash\" ciCommandSetEnv CI_OVERRIDE_SHELL \"$(ciCheckoutPath)/msys2/usr/bin/bash.exe\" ciCommandSetEnv CI_OVERRIDE_SHELL \"${msys2Path}/usr/bin/bash.exe\" # Detect the native Python version installed on the agent. On GitHub # Actions, the C:hostedtoolcachewindowsPython directory contains a # subdirectory for each installed Python version. # # The -V flag of the sort command sorts the input by version number. native_python_version=\"$(ls /c/hostedtoolcache/windows/Python | sort -Vr | head -n 1)\" # Make sure we use the native python interpreter instead of some msys equivalent # one way or another. The msys interpreters seem to have weird path conversions # baked in which break LLVM's build system one way or another, so let's use the # native version which keeps everything as native as possible. python_home=\"/c/hostedtoolcache/windows/Python/${native_python_version}/x64\" cp \"${python_home}/python.exe\" \"${python_home}/python3.exe\" ciCommandAddPath \"C:hostedtoolcachewindowsPython${native_python_version}x64\" ciCommandAddPath \"C:hostedtoolcachewindowsPython${native_python_version}x64Scripts\" fi", "commid": "rust_pr_73188.0"}], "negative_passages": []}
{"query_id": "q-en-rust-aff97ec191aea99de5c89b95e89c19f74b815e377511c0f4750bd408cafd42ab", "query": "Hi, I'm building a project and I'm using the nightly channel. I was testing it with several builds between 2019-09-20 and 2019-10-27, all with the same results \u2013 when targetting native, the compilation succeeds, when targetting WASM, we see ICE. Unfortunately, it was hard for me to minimize the example. Fortunately, the codebase is pretty small. Here is the exact commit: After downloading it you can run and it should succeed. But if you run (which is 1-line wrapper over ), we get: After commenting out this line, everything should compile OK (WASM target should compile as well): EDIT The backtrace from the compiler is not very helpful: However, there is some additional info:\nThat code is a mess! After an hour of digging through the code and removing half of it - the code that triggers the error is: I try to produce a MCVE out of it currently\nYes, the code is in a very work-in-progress state, so there are a lot of places which should be refactored or even removed, sorry for that. The line that I mentioned in my original post calls the function that you have pointed to. The function alone compiles well to WASM. It does not compile when used as follow:\nSeems to be related to . Here's something small-ish: I may give it a shot later, but now I'm gonna head home :)\nthat is amazing! I've created yet shorter version (still, there may be things to simplify here):\nmodify labels: -O-wasm +F-typealiasimpl_trait\n< 50 loc :tada:\nthank you for tracking it down\ncould you please change the title of this issue to something different, e.g. \"ICE with impl Fn alias\"? 
It's not about WASM anymore ;)\nBecause a lot has changed, here's a summary: version = \"13.1.0\" version = \"13.2.0\" source = \"registry+https://github.com/rust-lang/crates.io-index\" checksum = \"dd42951eb35079520ee29b7efbac654d85821b397ef88c8151600ef7e2d00217\" checksum = \"91d767c183a7e58618a609499d359ce3820700b3ebb4823a18c343b4a2a41a0d\" dependencies = [ \"futures\", \"log\",", "commid": "rust_pr_65957.0"}], "negative_passages": []}
{"query_id": "q-en-rust-850c94e9c8ed398c6b33b6954bd2411490d72e4af922592db5032013124867f7", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool rls no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and nominating for compiler team prioritization.", "positive_passages": [{"docid": "doc-en-rust-fe27cf5b5c1de90933256cfe46f8a405946e8573ad277dbc962767d68904df70", "text": "[[package]] name = \"rls\" version = \"1.39.0\" version = \"1.40.0\" dependencies = [ \"cargo\", \"cargo_metadata 0.8.0\", \"clippy_lints\", \"crossbeam-channel\", \"difference\", \"env_logger 0.6.2\", \"env_logger 0.7.0\", \"failure\", \"futures\", \"heck\",", "commid": "rust_pr_65957.0"}], "negative_passages": []}
{"query_id": "q-en-rust-850c94e9c8ed398c6b33b6954bd2411490d72e4af922592db5032013124867f7", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool rls no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and nominating for compiler team prioritization.", "positive_passages": [{"docid": "doc-en-rust-9b66978004e6c3dd8adc6b45dead2b7f5a46767f3c972662e0499cd5828ae748", "text": "\"num_cpus\", \"ordslice\", \"racer\", \"rand 0.6.1\", \"rand 0.7.0\", \"rayon\", \"regex\", \"rls-analysis\",", "commid": "rust_pr_65957.0"}], "negative_passages": []}
{"query_id": "q-en-rust-850c94e9c8ed398c6b33b6954bd2411490d72e4af922592db5032013124867f7", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool rls no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and nominating for compiler team prioritization.", "positive_passages": [{"docid": "doc-en-rust-15089504226b79a5c4300aa64f93c6ac382a52ff85c27bc06f8b6ade9fa45739", "text": "\"rls-rustc\", \"rls-span\", \"rls-vfs\", \"rustc-serialize\", \"rustc-workspace-hack\", \"rustc_tools_util 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)\", \"rustfmt-nightly\",", "commid": "rust_pr_65957.0"}], "negative_passages": []}
{"query_id": "q-en-rust-850c94e9c8ed398c6b33b6954bd2411490d72e4af922592db5032013124867f7", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool rls no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and nominating for compiler team prioritization.", "positive_passages": [{"docid": "doc-en-rust-064cb92de7efe8859fb5035f9bc2fa0b464e76363a03d3a3f96b7012eaee46a1", "text": "version = \"0.6.0\" dependencies = [ \"clippy_lints\", \"env_logger 0.6.2\", \"env_logger 0.7.0\", \"failure\", \"futures\", \"log\", \"rand 0.6.1\", \"rand 0.7.0\", \"rls-data\", \"rls-ipc\", \"serde\",", "commid": "rust_pr_65957.0"}], "negative_passages": []}
{"query_id": "q-en-rust-850c94e9c8ed398c6b33b6954bd2411490d72e4af922592db5032013124867f7", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool rls no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and nominating for compiler team prioritization.", "positive_passages": [{"docid": "doc-en-rust-52ac8ddd14dd95fc60b7252e3183a4652a7af9e34665736a21f94a99cbe06e44", "text": "] [[package]] name = \"rustc-serialize\" version = \"0.3.24\" source = \"registry+https://github.com/rust-lang/crates.io-index\" checksum = \"dcf128d1287d2ea9d80910b5f1120d0b8eede3fbf1abe91c40d39ea7d51e6fda\" [[package]] name = \"rustc-std-workspace-alloc\" version = \"1.99.0\" dependencies = [", "commid": "rust_pr_65957.0"}], "negative_passages": []}
{"query_id": "q-en-rust-850c94e9c8ed398c6b33b6954bd2411490d72e4af922592db5032013124867f7", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool rls no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and nominating for compiler team prioritization.", "positive_passages": [{"docid": "doc-en-rust-6a9323dde171f77890d1d5cff7514ed82343d328d737f2e190b856c0f08b068e", "text": " Subproject commit a18df16181947edd5eb593ea0f2321e0035448ee Subproject commit 5db91c7b94ca81eead6b25bcf6196b869a44ece0 ", "commid": "rust_pr_65957.0"}], "negative_passages": []}
{"query_id": "q-en-rust-0ba470772d9b9a7362d68a5614c6cf43d8a346aaed3be651a5f0a378dda192ef", "query": "Hi there! This is the second time I've compiled code with Rust ever, and my first time filing a bug report, so I may need some guidance if the info here isn't quite enough to work with :). Learning about println! in Rust by Example, I incorrectly entered the below code. I looked at the documentation and realized I had made a mistake, but the compiler still panicked and recommended I open a bug report. I really believe in Rust and want to see this project succeed, hopefully this information is helpful! I tried this code: println!(\"Pi is roughly: {:.*}, 3, 3.\") Expected to see this happen: \"Pi is roughly 3.142\" Instead, this happened: thread 'rustc' panicked at 'index out of bounds: the len is 0 but the index is 0', note: run with environment variable to display a backtrace. error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: rustc 1.38.0 ( 2019-09-23) running on x8664-unknown-linux-gnu :rustc 1.38.0 ( 2019-09-23) binary: rustc commit-hash: commit-date: 2019-09-23 host: x8664-unknown-linux-gnu release: 1.38.0 LLVM version: 9.0 Backtrace: thread 'rustc' panicked at 'index out of bounds: the len is 0 but the index is 0', stack backtrace: 0: backtrace::backtrace::libunwind::trace at 1: backtrace::backtrace::traceunsynchronized at 2: std::syscommon::backtrace::print at 3: std::syscommon::backtrace::print at 4: std::panicking::defaulthook::{{closure}} at 5: std::panicking::defaulthook at 6: rustc::util::common::panichook 7: std::panicking::rustpanicwithhook at 8: std::panicking::continuepanicfmt at 9: rustbeginunwind at 10: core::panicking::panicfmt at 11: core::panicking::panicboundscheck at 12: syntaxext::format::expandpreparsedformatargs 13: syntaxext::format::expandformatargsimpl 14: LayoutOf, TyLayout, LayoutError, HasTyCtxt, TargetDataLayout, HasDataLayout, 
LayoutOf, TyLayout, LayoutError, HasTyCtxt, TargetDataLayout, HasDataLayout, Size, }; use crate::rustc::ty::subst::Subst;", "commid": "rust_pr_66394.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2550961ff8fac6dc683f0437cd61e5d9c38f1568f0f3c74668ce389a308f3278", "query": "A MSVC x8664 bors job has failed on two unrelated PRs (, ) with the : In both cases, attempted to allocate several GB of memory while compiling the crate. One of these failures was in the job, and the other was in . This failure appears to be spurious: a previous version of passed the same job.\nI wonder if we could get a backtrace or something from this allocation -- it seems... suspicious that we're only sometimes OOMing. It's also interesting to note that this is almost exactly 4 GB in a single allocation, which is surprising generally ( is not a big crate -- 300 lines or so, with comments etc).\nOn my PR it was a different (almost) power of two: edit In both cases, the size of the allocation is 8 bytes less than a power of two.\nSo I minimized this to: on a recent nightly gives: It looks like the compiler is trying to allocate enough space to store a value of the return type. As long as I use a large enough value it reproduces reliably and it isn't specific to Windows: . I bisected this to and I'm guessing (cc is the root cause.\nYes, that seems likely. I will take a look at this tonight.\nshould we put on hold while this is sorted out?\nYeah, let's do that just to be safe.\ntriage: P-high. Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-d6b7d04fee0f3cd2f61bf052e4b14100472a7de468aaaaaaf02479c332a63834", "text": "use crate::const_eval::error_to_const_error; use crate::transform::{MirPass, MirSource}; /// The maximum number of bytes that we'll allocate space for a return value. const MAX_ALLOC_LIMIT: u64 = 1024; pub struct ConstProp; impl<'tcx> MirPass<'tcx> for ConstProp {", "commid": "rust_pr_66394.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2550961ff8fac6dc683f0437cd61e5d9c38f1568f0f3c74668ce389a308f3278", "query": "A MSVC x8664 bors job has failed on two unrelated PRs (, ) with the : In both cases, attempted to allocate several GB of memory while compiling the crate. One of these failures was in the job, and the other was in . This failure appears to be spurious: a previous version of passed the same job.\nI wonder if we could get a backtrace or something from this allocation -- it seems... suspicious that we're only sometimes OOMing. It's also interesting to note that this is almost exactly 4 GB in a single allocation, which is surprising generally ( is not a big crate -- 300 lines or so, with comments etc).\nOn my PR it was a different (almost) power of two: edit In both cases, the size of the allocation is 8 bytes less than a power of two.\nSo I minimized this to: on a recent nightly gives: It looks like the compiler is trying to allocate enough space to store a value of the return type. As long as I use a large enough value it reproduces reliably and it isn't specific to Windows: . I bisected this to and I'm guessing (cc is the root cause.\nYes, that seems likely. I will take a look at this tonight.\nshould we put on hold while this is sorted out?\nYeah, let's do that just to be safe.\ntriage: P-high. Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-c1d2797f5e0a912cd0d293cd0fdad1274dd743b2e1d0647c11eaf959138fd67f", "text": "ecx .layout_of(body.return_ty().subst(tcx, substs)) .ok() // Don't bother allocating memory for ZST types which have no values. .filter(|ret_layout| !ret_layout.is_zst()) // Don't bother allocating memory for ZST types which have no values // or for large values. .filter(|ret_layout| !ret_layout.is_zst() && ret_layout.size < Size::from_bytes(MAX_ALLOC_LIMIT)) .map(|ret_layout| ecx.allocate(ret_layout, MemoryKind::Stack)); ecx.push_stack_frame(", "commid": "rust_pr_66394.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2550961ff8fac6dc683f0437cd61e5d9c38f1568f0f3c74668ce389a308f3278", "query": "A MSVC x8664 bors job has failed on two unrelated PRs (, ) with the : In both cases, attempted to allocate several GB of memory while compiling the crate. One of these failures was in the job, and the other was in . This failure appears to be spurious: a previous version of passed the same job.\nI wonder if we could get a backtrace or something from this allocation -- it seems... suspicious that we're only sometimes OOMing. It's also interesting to note that this is almost exactly 4 GB in a single allocation, which is surprising generally ( is not a big crate -- 300 lines or so, with comments etc).\nOn my PR it was a different (almost) power of two: edit In both cases, the size of the allocation is 8 bytes less than a power of two.\nSo I minimized this to: on a recent nightly gives: It looks like the compiler is trying to allocate enough space to store a value of the return type. As long as I use a large enough value it reproduces reliably and it isn't specific to Windows: . I bisected this to and I'm guessing (cc is the root cause.\nYes, that seems likely. I will take a look at this tonight.\nshould we put on hold while this is sorted out?\nYeah, let's do that just to be safe.\ntriage: P-high. Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-3abbdcdebdb75745c2635e225e6ef7e055fe3c77a62144d5c8909f50ccf27a98", "text": ") -> Option<()> { let span = source_info.span; // #66397: Don't try to eval into large places as that can cause an OOM if place_layout.size >= Size::from_bytes(MAX_ALLOC_LIMIT) { return None; } let overflow_check = self.tcx.sess.overflow_checks(); // Perform any special handling for specific Rvalue types.", "commid": "rust_pr_66394.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2550961ff8fac6dc683f0437cd61e5d9c38f1568f0f3c74668ce389a308f3278", "query": "A MSVC x8664 bors job has failed on two unrelated PRs (, ) with the : In both cases, attempted to allocate several GB of memory while compiling the crate. One of these failures was in the job, and the other was in . This failure appears to be spurious: a previous version of passed the same job.\nI wonder if we could get a backtrace or something from this allocation -- it seems... suspicious that we're only sometimes OOMing. It's also interesting to note that this is almost exactly 4 GB in a single allocation, which is surprising generally ( is not a big crate -- 300 lines or so, with comments etc).\nOn my PR it was a different (almost) power of two: edit In both cases, the size of the allocation is 8 bytes less than a power of two.\nSo I minimized this to: on a recent nightly gives: It looks like the compiler is trying to allocate enough space to store a value of the return type. As long as I use a large enough value it reproduces reliably and it isn't specific to Windows: . I bisected this to and I'm guessing (cc is the root cause.\nYes, that seems likely. I will take a look at this tonight.\nshould we put on hold while this is sorted out?\nYeah, let's do that just to be safe.\ntriage: P-high. Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-ee49c55b0e6613225b8b877eaa75eeb5b5587617370a78996b0f956679259df0", "text": " // check-pass // only-x86_64 // Checks that the compiler does not actually try to allocate 4 TB during compilation and OOM crash. fn foo() -> [u8; 4 * 1024 * 1024 * 1024 * 1024] { unimplemented!() } fn main() { foo(); } ", "commid": "rust_pr_66394.0"}], "negative_passages": []}
{"query_id": "q-en-rust-2550961ff8fac6dc683f0437cd61e5d9c38f1568f0f3c74668ce389a308f3278", "query": "A MSVC x8664 bors job has failed on two unrelated PRs (, ) with the : In both cases, attempted to allocate several GB of memory while compiling the crate. One of these failures was in the job, and the other was in . This failure appears to be spurious: a previous version of passed the same job.\nI wonder if we could get a backtrace or something from this allocation -- it seems... suspicious that we're only sometimes OOMing. It's also interesting to note that this is almost exactly 4 GB in a single allocation, which is surprising generally ( is not a big crate -- 300 lines or so, with comments etc).\nOn my PR it was a different (almost) power of two: edit In both cases, the size of the allocation is 8 bytes less than a power of two.\nSo I minimized this to: on a recent nightly gives: It looks like the compiler is trying to allocate enough space to store a value of the return type. As long as I use a large enough value it reproduces reliably and it isn't specific to Windows: . I bisected this to and I'm guessing (cc is the root cause.\nYes, that seems likely. I will take a look at this tonight.\nshould we put on hold while this is sorted out?\nYeah, let's do that just to be safe.\ntriage: P-high. Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-9faf1e6ca6dfca36947b5e6bdc946d39a150daadbc8d8960bbcb4640021077db", "text": " // check-pass // only-x86_64 // Checks that the compiler does not actually try to allocate 4 TB during compilation and OOM crash. fn main() { [0; 4 * 1024 * 1024 * 1024 * 1024]; } ", "commid": "rust_pr_66394.0"}], "negative_passages": []}
{"query_id": "q-en-rust-e4dadc355bd87a465136e53a4eafefe0ad047116c4141b2562c0b5a0123bb91a", "query": "When using zipped iterators in a certain section of code, I hit an ICE. I've attached the project source -- sorry I could not produce a minimum example though I tried. Also, it's closed source, though I own it so can choose to provide the snapshot. Symptoms: Normally using zipped iterators doesn't cause an ICE, but in this case it is consistent (clean build, using , and ). Not zipping the iterators, and using prevents the ICE from occurring. This code fails to compile: This code compiles: Compilation error message: : 1.39 (stable): 1.40 (nightly): Backtrace: if let Err(mut errors) = self.fulfillment_cx.borrow_mut().select_where_possible(self) { let result = self.fulfillment_cx.borrow_mut().select_where_possible(self); if let Err(mut errors) = result { mutate_fullfillment_errors(&mut errors); self.report_fulfillment_errors(&errors, self.inh.body_id, fallback_has_occurred); }", "commid": "rust_pr_66388.0"}], "negative_passages": []}
{"query_id": "q-en-rust-e4dadc355bd87a465136e53a4eafefe0ad047116c4141b2562c0b5a0123bb91a", "query": "When using zipped iterators in a certain section of code, I hit an ICE. I've attached the project source -- sorry I could not produce a minimum example though I tried. Also, it's closed source, though I own it so can choose to provide the snapshot. Symptoms: Normally using zipped iterators doesn't cause an ICE, but in this case it is consistent (clean build, using , and ). Not zipping the iterators, and using prevents the ICE from occurring. This code fails to compile: This code compiles: Compilation error message: : 1.39 (stable): 1.40 (nightly): Backtrace: Context { waker, _marker: PhantomData } Context { waker, _marker: PhantomData, _marker2: PhantomData } } /// Returns a reference to the [`Waker`] for the current task.", "commid": "rust_pr_95985.0"}], "negative_passages": []}
{"query_id": "q-en-rust-d71b16d4607174d57ba73e9d95f06f2b41e1ab72932f98df17b457c3b78e0dad", "query": "One issue that came up in the discussion is that the type implements . This might have been introduced accidentally. probably does not matter at all, given that users will only observe a by reference. However has the impliciation that we will not be able to add non thread-safe methods to - e.g. in order to optimize thread-local wake-ups again. It might be interesting to see whether and support could be removed from the type. Unfortunately that is however a breaking change - even though it is not likely that any code currently uses out of the direct path. In a similar fashion it had been observed in the implementation of () that the type is also and . While it had been expected for - given that s are used to wake-up tasks from different threads, it might not have been for . One downside of s being is that it prevents optimizations. E.g. in linked ticket the original implementation contained an optimization that while a was not d (and thereby equipped with a different vtable) it could wake-up the local eventloop again by just setting a non-synchronized boolean flag in the current thread. However given that could be transferred to another thread, and called from there even within the context - this optmization is invalid. Here it would also be interesting if the requirement could be removed. I expect the amount of real-world usages to be in the same ballpark as sending across threads - hopefully 0. But it's again a breaking change cc , , ,\nEven if this weren't a breaking change, I'd want to see some really good evidence these optimizations would matter for real world code before choosing to make these types non-. I don't think getting rid of an atomic update in is a compelling example. 
Basically, in my view the current API is intentional and correct.\nwas introduced to leave room for future unforeseen requirements, and being restricts its usefulness as such, which is unfortunate. I can see a case for since we have , but it's difficult to imagine a case where it makes sense to be accessing a single from multiple threads concurrently.\nI have an example of where the current API is resulting in performance issues: I am trying to implement a combined Executor/Reactor around a winit event loop. To wake up the event loop from a different thread we have to post a message to the event loop which can be a non-trivial operation. But on the same thread, we know that we don't need to wake up the eventloop, so we can use much cheaper mechanisms like an atomic bool.\nBut that would be true if wakers were just also, which was an intentional part of the simplification from the previous design.\nI think the point is that being precludes the otherwise noninvasive restoration of a more efficient through an additional method on .\nI don't think it should be the judgement of the libs team (or any of us) to determine whether something is good enough and not needs to be further optimized for any use-case. Rusts goal as a language is to enable zero cost abstractions. This requires in my opinion to not be opinionated on any design choices which have a performance/footprint impact. I think some of the decisions that have been taken in the async/await world however are opinionated, and will have an impact on use-cases that currently are still better implemented with different mechanisms. I do not really want to elaborate on anything more particular, because I fear this would bring the discussion back to a arguing whether any X% performance degradation is still good enough. That is an OK decision to have for a software project whichs needs to decide whether Rusts async/await support is good enough for them, and whether they have to apply workarounds or not. 
But it's not a discussion which will move the issue here any more forward, because for yet another project the outcoming of the discussion might be different. PS: I do think it's okay and good if libraries like tokio or async-std are being more opinionated about what threading models they support, and they might sacrifice performance gains in rare usage scenarios for easy of use. But and language features are different, and we expect people to use them also for niche use-cases (e.g. there is certainly a lot of cool evaluation going on with the use of async/await in embedded contexts or kernels - where requirements might be very different than for a webserver based on tokio).\nThis is a trade off: Context is either Sync or it isn't, some users benefit from one choice (they can use references to Context as threadsafe) and some users benefit from the other choice (they can have nonthreadsafe constructs inside the Context construct). Ultimately the libs team has to decide one way or the other on these points in the API where there is a trade off between thread safety and not. However, this decision has already been made and it would be a breaking change to change it, so this discussion is moot.\nDo you have an example where this is actually done within the ecosystem, or a use-case for it? As far as I can tell, this would require spawning a thread using something like and capturing the from within a ?\nA common way a user could depend on a type being is to store it in an and send it across threads. This isn't very likely for Context but it is a very reasonable pattern for .\nI think its a fair point that we didnt intentionally make context send and sync, and that this precludes adding more fields to context that are not send and sync, and that since context is just a buffer for future changes, it would probably have been better to make it non-threadsafe. But now its a breaking change. 
If crater showed no regressions and there's no known patterns it nullifies, I would be open to changing this about Context personally, but I am somewhat doubtful the libs team as a whole would approve making a breaking change to support hypothetical future extensions. If anyone on the thread wants to pursue a change to context (not waker), they should get crater results as the next step of the conversation.\nThere is no discussion about being or not - it obviously has to be. The question is purely about . And in order to do what you describe you have to store an and send it somewhere. Which is very doubtful to happen, due to the not very useful lifetime and due to already being an like thing internally. As mentioned, the most likely way to see this behavior is some scoped thread API being used inside - or people doing some very advanced synchronization against an IO driver running in another thread. For all those there are certainly better ways than to rely on being . E.g. to return a flag from the synchronized section and call from the original thread when done. Or to and send the if there is doubt whether it needs to be persisted somewhere else. And yes, the impact on is even bigger. It was meant as an extension point. But we can not add any functions to it which are not thread-safe. E.g. if we want to add methods which spawn another task on the same thread as the current executor thread, and which allows for spawning futures (like Tokios ) - we would have an issue.\nWhy would these be \"certainly better\" than just calling from the scoped threads?\nTo be a little more expansive: the scoped threadpool you're talking about could very well not be scoped inside a poll method - rather, the waker could be cloned once and then owned by one thread and referenced by many other scoped threads in some sort of threadpool construct for CPU bound tasks. 
This seems like a perfectly valid implementation which allows you to divide up the work among many threads without cloning the waker many times. This is potentially an optimization. I don't think this optimization is very important, but I don't think the optimizations allowed by making waker are very important either. The point is that there's a trade off between the optimizations allowed by assuming references to wakers can be shared across threads and the optimizations allowed by assuming they can't be, its not the case that one side is inherently the \"zero cost\" side.\nWe discussed this at the recent Libs meeting and felt that deferring to would make sense here.\nThough I don't have the bandwidth to carry this, this definitely seems interesting. Seeing projects such as and closed-source initiatives lead me to believe that there may actually be a decent case to enable some form of -to avoid the synchronization overhead on single-threaded runtimes. I don't know if we should make this a priority, but at least we probably shouldn't close it just yet.\nI think bringing back would be a nice additional improvement. But for the moment it would be nice to just fix the general sync-ness, which blocks all other fixes and improvements.\nAs the person who originally introduced , it was very much a conscious decision to get rid of it and to make and . This was called out explicitly .\nThe cited discussion in the RFC justifies making Send, and predates the decision to (re)introduce . Making doesn't defeat the objectives discussed there. In particular, it does not impact ergonomics for the common case at all.\nThis was recently closed, however the issue mentions both and . The recent change only affects . There is good reason that is , however after reading this issue I'm unsure that there are good reasons why is .\n(NOT A CONTRIBUTION) Waker supports wakebyref, and so it is possible to pass to another thread and wake it from that thread. 
This functionality would not be possible without Waker being Sync. Supporting wakers that are either not Send or not Sync will best be done by adding new APIs to Context and a new LocalWaker type.\nForgive me the naive question, but how does a type work with ? As I understand, requires , as a concrete type not a trait, so this would also require a trait? At this point we have two completely incompatible async systems... Which is only useful if is not cheap to clone, and has the burden of another lifetime bound. It surprises me that the docs don't say anything about the intended cost of .\n(NOT A CONTRIBUTION) Future::poll does not take Waker as an argument, Future::poll takes Context. Context could have the ability to set a LocalWaker, so that an executor could set this. Reactors which operate on the same thread as the future that polls them could migrate to using the LocalWaker argument instead of Waker. Here is a pre-RFC that someone wrote with a possible API: Commentary like this is both factually wrong and not helpful for the mood of the thread.", "positive_passages": [{"docid": "doc-en-rust-876c353a5ebd9d6d5e46e7e1ae0e48bd6ecff32a9de5d4f6df58b8bbe1c0015e", "text": " use core::task::{Context, Poll, RawWaker, RawWakerVTable, Waker}; use core::task::{Poll, RawWaker, RawWakerVTable, Waker}; #[test] fn poll_const() {", "commid": "rust_pr_95985.0"}], "negative_passages": []}
{"query_id": "q-en-rust-d71b16d4607174d57ba73e9d95f06f2b41e1ab72932f98df17b457c3b78e0dad", "query": "One issue that came up in the discussion is that the type implements . This might have been introduced accidentally. probably does not matter at all, given that users will only observe a by reference. However has the implication that we will not be able to add non-thread-safe methods to - e.g. in order to optimize thread-local wake-ups again. It might be interesting to see whether and support could be removed from the type. Unfortunately that is however a breaking change - even though it is not likely that any code currently uses out of the direct path. In a similar fashion it had been observed in the implementation of () that the type is also and . While it had been expected for - given that s are used to wake up tasks from different threads, it might not have been for . One downside of s being is that it prevents optimizations. E.g. in the linked ticket the original implementation contained an optimization that while a was not d (and thereby equipped with a different vtable) it could wake up the local event loop again by just setting a non-synchronized boolean flag in the current thread. However given that could be transferred to another thread, and called from there even within the context - this optimization is invalid. Here it would also be interesting if the requirement could be removed. I expect the amount of real-world usages to be in the same ballpark as sending across threads - hopefully 0. But it's again a breaking change cc , , ,\nEven if this weren't a breaking change, I'd want to see some really good evidence these optimizations would matter for real world code before choosing to make these types non-. I don't think getting rid of an atomic update in is a compelling example.\nBasically, in my view the current API is intentional and correct.\nwas introduced to leave room for future unforeseen requirements, and being restricts its usefulness as such, which is unfortunate. I can see a case for since we have , but it's difficult to imagine a case where it makes sense to be accessing a single from multiple threads concurrently.\nI have an example of where the current API is resulting in performance issues: I am trying to implement a combined Executor/Reactor around a winit event loop. To wake up the event loop from a different thread we have to post a message to the event loop, which can be a non-trivial operation. But on the same thread, we know that we don't need to wake up the event loop, so we can use much cheaper mechanisms like an atomic bool.
Basically, in my view the current API is intentional and correct.\nwas introduced to leave room for future unforeseen requirements, and being restricts its usefulness as such, which is unfortunate. I can see a case for since we have , but it's difficult to imagine a case where it makes sense to be accessing a single from multiple threads concurrently.\nI have an example of where the current API is resulting in performance issues: I am trying to implement a combined Executor/Reactor around a winit event loop. To wake up the event loop from a different thread we have to post a message to the event loop which can be a non-trivial operation. But on the same thread, we know that we don't need to wake up the eventloop, so we can use much cheaper mechanisms like an atomic bool.\nBut that would be true if wakers were just also, which was an intentional part of the simplification from the previous design.\nI think the point is that being precludes the otherwise noninvasive restoration of a more efficient through an additional method on .\nI don't think it should be the judgement of the libs team (or any of us) to determine whether something is good enough and not needs to be further optimized for any use-case. Rusts goal as a language is to enable zero cost abstractions. This requires in my opinion to not be opinionated on any design choices which have a performance/footprint impact. I think some of the decisions that have been taken in the async/await world however are opinionated, and will have an impact on use-cases that currently are still better implemented with different mechanisms. I do not really want to elaborate on anything more particular, because I fear this would bring the discussion back to a arguing whether any X% performance degradation is still good enough. That is an OK decision to have for a software project whichs needs to decide whether Rusts async/await support is good enough for them, and whether they have to apply workarounds or not. 
But it's not a discussion which will move this issue forward any further, because for yet another project the outcome of the discussion might be different. PS: I do think it's okay and good if libraries like tokio or async-std are being more opinionated about what threading models they support, and they might sacrifice performance gains in rare usage scenarios for ease of use. But and language features are different, and we expect people to use them also for niche use-cases (e.g. there is certainly a lot of cool evaluation going on with the use of async/await in embedded contexts or kernels - where requirements might be very different than for a webserver based on tokio).\nThis is a trade-off: Context is either Sync or it isn't; some users benefit from one choice (they can use references to Context as thread-safe) and some users benefit from the other choice (they can have non-thread-safe constructs inside the Context construct). Ultimately the libs team has to decide one way or the other on these points in the API where there is a trade-off between thread safety and not. However, this decision has already been made and it would be a breaking change to change it, so this discussion is moot.\nDo you have an example where this is actually done within the ecosystem, or a use-case for it? As far as I can tell, this would require spawning a thread using something like and capturing the from within a ?\nA common way a user could depend on a type being is to store it in an and send it across threads. This isn't very likely for Context but it is a very reasonable pattern for .\nI think it's a fair point that we didn't intentionally make Context Send and Sync, and that this precludes adding more fields to Context that are not Send and Sync, and that since Context is just a buffer for future changes, it would probably have been better to make it non-thread-safe. But now it's a breaking change.
If crater showed no regressions and there are no known patterns it nullifies, I would be open to changing this about Context personally, but I am somewhat doubtful the libs team as a whole would approve making a breaking change to support hypothetical future extensions. If anyone on the thread wants to pursue a change to Context (not Waker), they should get crater results as the next step of the conversation.\nThere is no discussion about being or not - it obviously has to be. The question is purely about . And in order to do what you describe you have to store an and send it somewhere. That is very unlikely to happen, due to the not very useful lifetime and due to already being an -like thing internally. As mentioned, the most likely way to see this behavior is some scoped thread API being used inside - or people doing some very advanced synchronization against an IO driver running in another thread. For all those there are certainly better ways than to rely on being . E.g. to return a flag from the synchronized section and call from the original thread when done. Or to and send the if there is doubt whether it needs to be persisted somewhere else. And yes, the impact on is even bigger. It was meant as an extension point. But we cannot add any functions to it which are not thread-safe. E.g. if we want to add methods which spawn another task on the same thread as the current executor thread, and which allow for spawning futures (like Tokio's ) - we would have an issue.\nWhy would these be \"certainly better\" than just calling from the scoped threads?\nTo be a little more expansive: the scoped threadpool you're talking about could very well not be scoped inside a poll method - rather, the waker could be cloned once and then owned by one thread and referenced by many other scoped threads in some sort of threadpool construct for CPU-bound tasks.
This seems like a perfectly valid implementation which allows you to divide up the work among many threads without cloning the waker many times. This is potentially an optimization. I don't think this optimization is very important, but I don't think the optimizations allowed by making waker are very important either. The point is that there's a trade-off between the optimizations allowed by assuming references to wakers can be shared across threads and the optimizations allowed by assuming they can't be; it's not the case that one side is inherently the \"zero cost\" side.\nWe discussed this at the recent Libs meeting and felt that deferring to would make sense here.\nThough I don't have the bandwidth to carry this, this definitely seems interesting. Seeing projects such as and closed-source initiatives leads me to believe that there may actually be a decent case to enable some form of - to avoid the synchronization overhead on single-threaded runtimes. I don't know if we should make this a priority, but at least we probably shouldn't close it just yet.\nI think bringing back would be a nice additional improvement. But for the moment it would be nice to just fix the general sync-ness, which blocks all other fixes and improvements.\nAs the person who originally introduced , it was very much a conscious decision to get rid of it and to make and . This was called out explicitly .\nThe cited discussion in the RFC justifies making Send, and predates the decision to (re)introduce . Making doesn't defeat the objectives discussed there. In particular, it does not impact ergonomics for the common case at all.\nThis was recently closed, however the issue mentions both and . The recent change only affects . There is good reason that is , however after reading this issue I'm unsure that there are good reasons why is .\n(NOT A CONTRIBUTION) Waker supports wake_by_ref, and so it is possible to pass to another thread and wake it from that thread.
This functionality would not be possible without Waker being Sync. Supporting wakers that are either not Send or not Sync will best be done by adding new APIs to Context and a new LocalWaker type.\nForgive me the naive question, but how does a type work with ? As I understand, requires , as a concrete type not a trait, so this would also require a trait? At this point we have two completely incompatible async systems... Which is only useful if is not cheap to clone, and has the burden of another lifetime bound. It surprises me that the docs don't say anything about the intended cost of .\n(NOT A CONTRIBUTION) Future::poll does not take Waker as an argument, Future::poll takes Context. Context could have the ability to set a LocalWaker, so that an executor could set this. Reactors which operate on the same thread as the future that polls them could migrate to using the LocalWaker argument instead of Waker. Here is a pre-RFC that someone wrote with a possible API: Commentary like this is both factually wrong and not helpful for the mood of the thread.", "positive_passages": [{"docid": "doc-en-rust-7d0dcfed4463ef882f27def797166c874a42323686ed9cb2c439f6f82ca71d80", "text": "static WAKER: Waker = unsafe { Waker::from_raw(VOID_WAKER) }; static CONTEXT: Context<'static> = Context::from_waker(&WAKER); static WAKER_REF: &'static Waker = CONTEXT.waker(); WAKER_REF.wake_by_ref(); WAKER.wake_by_ref(); }", "commid": "rust_pr_95985.0"}], "negative_passages": []}
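The single-threaded wake-flag optimization this thread keeps returning to (the winit/event-loop case) can only be approximated with synchronization on stable Rust, precisely because `Waker` must be `Send + Sync`. The sketch below is illustrative, not from the thread; `WakeFlag` and `woken_after_wake` are invented names. A hypothetical non-`Sync` `LocalWaker` could replace the atomic with a plain `Cell<bool>`.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::{Wake, Waker};

// Because `Waker` is `Send + Sync`, even a strictly single-threaded
// event loop must synchronize its wake-up flag: an atomic store here,
// where a thread-local waker could use an unsynchronized `Cell<bool>`.
struct WakeFlag(AtomicBool);

impl Wake for WakeFlag {
    fn wake(self: Arc<Self>) {
        self.0.store(true, Ordering::Release);
    }
}

// Build a waker over the flag, wake it, and report whether the flag
// was set - the minimal "set a boolean to re-poll" pattern.
fn woken_after_wake() -> bool {
    let flag = Arc::new(WakeFlag(AtomicBool::new(false)));
    let waker = Waker::from(flag.clone());
    waker.wake_by_ref();
    flag.0.load(Ordering::Acquire)
}

fn main() {
    assert!(woken_after_wake());
}
```

The atomic here is the "synchronization overhead on single-threaded runtimes" the earlier comments mention; whether it is measurable in practice is exactly what the thread disputes.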
{"query_id": "q-en-rust-7f533f7f6641406ed74aea22c264ad9b9554f9fdc560a3dc927fc1c0830c750c", "query": "This works; let's consider adding a test for it since it's a bit of a special case. Reported to be working in by\ndo you mind if I add this? My laptop is no longer broken ^^ so I can actually get some work done without disappearing for a week at a time. I put this in a file called and ran , is that about the right approach?\nIt is, but I'd recommend using to make it go faster. Also use to just test that file. Also, I'd rename the test to not suggest ... something like .\nI ran about an hour ago and that worked fine (although it took a while ^^). However, since then I updated nightly and I'm now getting compile errors when building stage 1: What am I doing wrong? Full error: (cross posted from https://rust-)", "positive_passages": [{"docid": "doc-en-rust-37cd077b4a10670ee5b5aedb786861c29994e74419aa9e68e108268c98b6d627", "text": " // check-pass #![feature(const_if_match)] enum E { A, B, C } const fn f(e: E) -> usize { match e { _ => 0 } } fn main() { const X: usize = f(E::C); assert_eq!(X, 0); assert_eq!(f(E::A), 0); } ", "commid": "rust_pr_66786.0"}], "negative_passages": []}
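The feature exercised by the test in this record - `match` inside a `const fn` - can be shown with a small self-contained variant. This is an illustrative rewrite with explicit arms instead of the record's wildcard arm; the original test needed the then-unstable `#![feature(const_if_match)]` gate, while `match` in const contexts has since been stabilized, so this compiles on current stable.

```rust
#[allow(dead_code)]
enum E { A, B, C }

// `match` on an enum inside a `const fn`: usable both in compile-time
// constants and in ordinary runtime calls.
const fn f(e: E) -> usize {
    match e {
        E::A => 0,
        E::B => 1,
        E::C => 2,
    }
}

fn main() {
    const X: usize = f(E::C); // evaluated at compile time
    assert_eq!(X, 2);
    assert_eq!(f(E::A), 0); // the same function, called at runtime
}
```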
{"query_id": "q-en-rust-01d2a6e0236cf8f43cd09d26f30dba7a41be7bb417992c9cbd42d0737301118d", "query": "I'm trying to build my crate on nightly, and I'm getting a compiler panic. I bisected several nightly builds to try and narrow down when the problem started. The last known good nightly was , and the first failing was The panic also occurs in . The crates causing the panic are available here:\ncc\ntriage: P-high, removing nomination label.\nMinimal repro:\nAs some info, I've narrowed down the offending commit to \u2013\u2013 most likely the changes in .", "positive_passages": [{"docid": "doc-en-rust-ea431610114ba5dccf9f4e91c5d7f9dd25f5b30f295f08a7e97e1cd93977e0f2", "text": "variant_index: VariantIdx, dest: PlaceTy<'tcx, M::PointerTag>, ) -> InterpResult<'tcx> { let variant_scalar = Scalar::from_u32(variant_index.as_u32()).into(); // Layout computation excludes uninhabited variants from consideration // therefore there's no way to represent those variants in the given layout. if dest.layout.for_variant(self, variant_index).abi.is_uninhabited() { throw_ub!(Unreachable); } match dest.layout.variants { layout::Variants::Single { index } => { if index != variant_index { throw_ub!(InvalidDiscriminant(variant_scalar)); } assert_eq!(index, variant_index); } layout::Variants::Multiple { discr_kind: layout::DiscriminantKind::Tag,", "commid": "rust_pr_66960.0"}], "negative_passages": []}
{"query_id": "q-en-rust-01d2a6e0236cf8f43cd09d26f30dba7a41be7bb417992c9cbd42d0737301118d", "query": "I'm trying to build my crate on nightly, and I'm getting a compiler panic. I bisected several nightly builds to try and narrow down when the problem started. The last known good nightly was , and the first failing was The panic also occurs in . The crates causing the panic are available here:\ncc\ntriage: P-high, removing nomination label.\nMinimal repro:\nAs some info, I've narrowed down the offending commit to \u2013\u2013 most likely the changes in .", "positive_passages": [{"docid": "doc-en-rust-72e60209aeb8c12608cd98adfeb26a27dab5fa303fda88fc621f36621fde0452", "text": "discr_index, .. } => { if !dest.layout.ty.variant_range(*self.tcx).unwrap().contains(&variant_index) { throw_ub!(InvalidDiscriminant(variant_scalar)); } // No need to validate that the discriminant here because the // `TyLayout::for_variant()` call earlier already checks the variant is valid. let discr_val = dest.layout.ty.discriminant_for_variant(*self.tcx, variant_index).unwrap().val;", "commid": "rust_pr_66960.0"}], "negative_passages": []}
{"query_id": "q-en-rust-01d2a6e0236cf8f43cd09d26f30dba7a41be7bb417992c9cbd42d0737301118d", "query": "I'm trying to build my crate on nightly, and I'm getting a compiler panic. I bisected several nightly builds to try and narrow down when the problem started. The last known good nightly was , and the first failing was The panic also occurs in . The crates causing the panic are available here:\ncc\ntriage: P-high, removing nomination label.\nMinimal repro:\nAs some info, I've narrowed down the offending commit to \u2013\u2013 most likely the changes in .", "positive_passages": [{"docid": "doc-en-rust-fedbed21ca0fcd8a211a5cdb0109e51e2b4e21fe0c8dd1eab26daf9cfbcfe8ad", "text": "discr_index, .. } => { if !variant_index.as_usize() < dest.layout.ty.ty_adt_def().unwrap().variants.len() { throw_ub!(InvalidDiscriminant(variant_scalar)); } // No need to validate that the discriminant here because the // `TyLayout::for_variant()` call earlier already checks the variant is valid. if variant_index != dataful_variant { let variants_start = niche_variants.start().as_u32(); let variant_index_relative = variant_index.as_u32()", "commid": "rust_pr_66960.0"}], "negative_passages": []}
{"query_id": "q-en-rust-01d2a6e0236cf8f43cd09d26f30dba7a41be7bb417992c9cbd42d0737301118d", "query": "I'm trying to build my crate on nightly, and I'm getting a compiler panic. I bisected several nightly builds to try and narrow down when the problem started. The last known good nightly was , and the first failing was The panic also occurs in . The crates causing the panic are available here:\ncc\ntriage: P-high, removing nomination label.\nMinimal repro:\nAs some info, I've narrowed down the offending commit to \u2013\u2013 most likely the changes in .", "positive_passages": [{"docid": "doc-en-rust-03704bbf2f682f0a4fbcb6335d6aa1e284ca6d1487ca81a289948d40340397f7", "text": " // build-pass // compile-flags: --crate-type lib // Regression test for ICE which occurred when const propagating an enum with three variants // one of which is uninhabited. pub enum ApiError {} #[allow(dead_code)] pub struct TokioError { b: bool, } pub enum Error { Api { source: ApiError, }, Ethereum, Tokio { source: TokioError, }, } struct Api; impl IntoError #[unstable(feature = \"float_approx_unchecked_to\", issue = \"67058\")] #[unstable(feature = \"convert_float_to_int\", issue = \"67057\")] #[doc(hidden)] unsafe fn approx_unchecked(self) -> Int; unsafe fn to_int_unchecked(self) -> Int; } macro_rules! impl_float_to_int {", "commid": "rust_pr_70487.0"}], "negative_passages": []}
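The float-to-int record that follows concerns two behaviors that later stabilized: saturating `as` casts (the default since Rust 1.45) and an unchecked escape hatch for callers who already know the value is in range (the method discussed as `approx_unchecked_to` shipped under the name `to_int_unchecked`). A sketch of both on current stable, with `saturating_u8` as an invented helper name:

```rust
// `as` casts from float to int saturate at the target type's bounds
// and map NaN to 0, so they are always defined.
fn saturating_u8(x: f32) -> u8 {
    x as u8
}

fn main() {
    assert_eq!(saturating_u8(300.0), 255);
    assert_eq!(saturating_u8(-1.0), 0);
    assert_eq!(saturating_u8(f32::NAN), 0);

    // `to_int_unchecked` skips the range check entirely; it is UB for
    // out-of-range or NaN inputs, so the caller must guarantee the
    // (truncated) value fits in the target type.
    let value = 4.6f32;
    let rounded: u16 = unsafe { value.to_int_unchecked() };
    assert_eq!(rounded, 4);
}
```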
{"query_id": "q-en-rust-6cf86ef85e3f58463df2f0afa844964d29923613a7cb40d6acfd7169cf886946", "query": "First discussed in issue As of Rust 1.39, casting a floating point number to an integer with is Undefined Behavior if the value is out of range. fixes this soundness hole by making “saturate” to the maximum or minimum value of the integer type (or zero for ), but has measurable negative performance impact in some benchmarks. There is some consensus in that thread for enabling saturation by default anyway, but provide an alternative for users who know through some other means that their values are in range. PR adds that method to each of and . /// #![feature(float_approx_unchecked_to)] /// /// let value = 4.6f32; /// let rounded = unsafe { value.approx_unchecked_to::