{"query_id": "q-en-kubernetes-0000ac51c1276b4740fb122d9b802bdadd2033eee75dec85386e4ab3896771a8", "query": "} factory.CreateFromConfig(policy) hpa := factory.GetHardPodAffinitySymmetricWeight() if hpa != v1.DefaultHardPodAffinitySymmetricWeight { t.Errorf(\"Wrong hardPodAffinitySymmetricWeight, ecpected: %d, got: %d\", v1.DefaultHardPodAffinitySymmetricWeight, hpa) } } func TestCreateFromConfigWithHardPodAffinitySymmetricWeight(t *testing.T) { var configData []byte var policy schedulerapi.Policy handler := utiltesting.FakeHandler{ StatusCode: 500, ResponseBody: \"\", T: t, } server := httptest.NewServer(&handler) defer server.Close() client := clientset.NewForConfigOrDie(&restclient.Config{Host: server.URL, ContentConfig: restclient.ContentConfig{GroupVersion: &api.Registry.GroupOrDie(v1.GroupName).GroupVersion}}) informerFactory := informers.NewSharedInformerFactory(client, 0) factory := NewConfigFactory( v1.DefaultSchedulerName, client, informerFactory.Core().V1().Nodes(), informerFactory.Core().V1().Pods(), informerFactory.Core().V1().PersistentVolumes(), informerFactory.Core().V1().PersistentVolumeClaims(), informerFactory.Core().V1().ReplicationControllers(), informerFactory.Extensions().V1beta1().ReplicaSets(), informerFactory.Apps().V1beta1().StatefulSets(), informerFactory.Core().V1().Services(), v1.DefaultHardPodAffinitySymmetricWeight, ) // Pre-register some predicate and priority functions RegisterFitPredicate(\"PredicateOne\", PredicateOne) RegisterFitPredicate(\"PredicateTwo\", PredicateTwo) RegisterPriorityFunction(\"PriorityOne\", PriorityOne, 1) RegisterPriorityFunction(\"PriorityTwo\", PriorityTwo, 1) configData = []byte(`{ \"kind\" : \"Policy\", \"apiVersion\" : \"v1\", \"predicates\" : [ {\"name\" : \"TestZoneAffinity\", \"argument\" : {\"serviceAffinity\" : {\"labels\" : [\"zone\"]}}}, {\"name\" : \"TestRequireZone\", \"argument\" : {\"labelsPresence\" : {\"labels\" : [\"zone\"], \"presence\" : true}}}, {\"name\" : \"PredicateOne\"}, {\"name\" : \"PredicateTwo\"} ], \"priorities\" : [ {\"name\" : \"RackSpread\", \"weight\" : 3, \"argument\" : {\"serviceAntiAffinity\" : {\"label\" : \"rack\"}}}, {\"name\" : \"PriorityOne\", \"weight\" : 2}, {\"name\" : \"PriorityTwo\", \"weight\" : 1} ], \"hardPodAffinitySymmetricWeight\" : 10 }`) if err := runtime.DecodeInto(latestschedulerapi.Codec, configData, &policy); err != nil { t.Errorf(\"Invalid configuration: %v\", err) } factory.CreateFromConfig(policy) hpa := factory.GetHardPodAffinitySymmetricWeight() if hpa != 10 { t.Errorf(\"Wrong hardPodAffinitySymmetricWeight, ecpected: %d, got: %d\", 10, hpa) } } func TestCreateFromEmptyConfig(t *testing.T) {", "positive_passages": [{"docid": "doc-en-kubernetes-88466bc637a805045217da94feabbdbd3df0ba52cd0d9904e95d3984c4cf4efa", "text": "( noticed this) HardPodAffinitySymmetricWeight should be in the , not in KubeSchedulerConfiguration If/when fixing this, we need to be careful to do it in a backward-compatilble way.\nDo you mean ? It seems scheduler policy config's args can not passed to FitPredicates; it need to update related interface. For the : mark to be deprecated and set scheduler policy config to HardPodAffinitySymmetricWeight if policy config is empty.\nAre you working on this? if no, I can take it.\n, nop; please go ahead.\n/ / / , When reviewing the PR, there's one question on : the default value of is 1, which means the weight (1) will always add to priorities if term matched (), does it our expectation? And the user has to set it to 0 to disable it. 
IMO, the default value of should be 0.\nThe value 1 (instead of 0) was chosen so that if you say A must run with B, there is an automatic preference that B should run with A. This seems like reasonable behavior.\nThanks very much; reasonable case to me. But it seems we can not disable it: if set to 0, the default funcs will set it to default (1); similar result to un-set. Anyway, no user complain this now :).", "commid": "kubernetes_issue_43845", "tokennum": 306}], "negative_passages": []}
{"query_id": "q-en-kubernetes-002b4e03797943125ecc2bf5697fa0fc18e6137434144c1a6bbfcefbb129a789", "query": "opName := op.Name return wait.Poll(operationPollInterval, operationPollTimeoutDuration, func() (bool, error) { start := time.Now() gce.operationPollRateLimiter.Accept() duration := time.Now().Sub(start) if duration > 5*time.Second { glog.Infof(\"pollOperation: waited %v for %v\", duration, opName)", "positive_passages": [{"docid": "doc-en-kubernetes-599cbd9914ac52a34ee08ac617d1f4bd774f52f2d5bdcb07fa1a0e76f7194545", "text": "On a large cluster, the routecontroller basically makes O(n) requests all at once, and most of them error out with \"Rate Limit Exceeded\". We then wait for operations to complete on the ones that did stick, and damn the rest, they'll be caught next reconcile (which is a while). In fact, in some cases it appears we're spamming hard enough to get rate limited by the upper-level API DOS protections and seeing something akin to , e.g.: We can do better than this!\nc.f. , which is the other leg of this problem\nI took a look into code and it seems that we are doing full reconciliation every 10s. If we need to create 2000 routes at the beginning it seems to frequently. I think that we need to: first compute how many routes we have to create based on that try to spread them over some time to reduce possibility of causing 'rate limit exceeded' I will try to put some small PR together as a starting point.\nI'm reopening this one, since this doesn't seem to be solved. So basically something strange is happening here. With PR we have 10/s limit on API calls send to GCE. And we are still rate-limitted. However, according to documentation we are allowed for 20/s: So something either doesn't work or is weird.\nWhat is super interesting from the logs of recent run: It starts to creating routes: After 2 minutes from that, we are hitting \"Rate limit Exceeded\" for the first time: And only after 9 minutes we create the first route ever. The second one is created after 15 minutes, and from that point we create a number of them So it seems that nothing useful is happening for the first 2 minutes, and then between 3m30 and 9m30\nOK - so I think that what is happening is that all controllers are sharing GCE interface (and thuse they have common throttling). So if one controller is generating a lot of API calls, then other controllers may be throttled. has a hypothesis that it may be caused by nodecontroller.\nOne thing my PR didn't address (because it was late): Now we have some obnoxious differences in logging when you go to make a request, because the versus could actually be absorbing a fair amount of ratelimiter.", "commid": "kubernetes_issue_26119", "tokennum": 488}], "negative_passages": []}
{"query_id": "q-en-kubernetes-002b4e03797943125ecc2bf5697fa0fc18e6137434144c1a6bbfcefbb129a789", "query": "opName := op.Name return wait.Poll(operationPollInterval, operationPollTimeoutDuration, func() (bool, error) { start := time.Now() gce.operationPollRateLimiter.Accept() duration := time.Now().Sub(start) if duration > 5*time.Second { glog.Infof(\"pollOperation: waited %v for %v\", duration, opName)", "positive_passages": [{"docid": "doc-en-kubernetes-bf24c8733a67a958496376416be2eb636c6e4bfb94999815173220461c93fd0c", "text": "It could before because of the operation rate limiter and operation polling anyways, but now could be worse.\n- yes I'm aware of that. And probably this is exactly the case here.\nI got fooled by that - I guess we should fix it somehow.\nOne approach to fixing it is a similar approach to the operation polling code, but in the : warn if we ratelimited for more than N seconds (and dump the request or something? not clear how to clarify what we were sending).\nI found something else - Kubelet is also building GCE interface: We have 2000 kubelets, so if all of them send multiple requests to GCE, we may hit limits.\nAny idea in what circumstances Kubelet will contact cloud provider?\nHmm - it seems it contacts GCE exactly once at the beginning. So it doesn't explain much.\nI think it's only here: I've rarely seen GCE ratelimit readonly calls, in practice, and kubelets come up pretty well staggered.\nOK - so with logs that I , it is pretty clear that we are very heavily throttled on GCE api calls. In particular, when I was running 1000-node cluster, I see lines where we were throttle for 1m30. I'm pretty sure it's significantly higher in 2000-node cluster. I'm going to investigate a bit more where all those reuqests come from.\nActually - it seems that we have a pretty big problem here. Basically, Kubelet seems to send a lot of requests to GCE. So to clarify: every time Kubelet is updating NodeStatus (so every 10s by default) it is sending an API call to GCE to get it's not addresses: that means, that if we have 2000 nodes, kubelets generate 200 QPS to GCE (which is significantly more than we can afford) We should think how to solve this issue, but I think the options are: increasing QPS quota or calling GCE only once per X statusUpdates.\nThe second issue is that we are throttled at the controller level too, and this one I'm tryin to understand right now.", "commid": "kubernetes_issue_26119", "tokennum": 457}], "negative_passages": []}
{"query_id": "q-en-kubernetes-002b4e03797943125ecc2bf5697fa0fc18e6137434144c1a6bbfcefbb129a789", "query": "opName := op.Name return wait.Poll(operationPollInterval, operationPollTimeoutDuration, func() (bool, error) { start := time.Now() gce.operationPollRateLimiter.Accept() duration := time.Now().Sub(start) if duration > 5*time.Second { glog.Infof(\"pollOperation: waited %v for %v\", duration, opName)", "positive_passages": [{"docid": "doc-en-kubernetes-b1dad1b374e4c58d925d9fef62d9846d5e0e5eda04aecdd78a124036a36cf31c", "text": "- that said, any rate-limitting on our side, will not solve the kubelet-related problem\nHmm - actually looking into implementation it seems that NodeAddresses are only contacting metadata server, so I'm not longer that convinced...\nAlso, I think that important thing is that a single call to GCE API can translate to multiple api calls to GCE. In particular - CreateRoute translates to: get instance insert route a bunch of getOperation (until this is finished)\nOK - so I think that what is happening here is that since CreateRoute translates to: get instance insert route a bunch of getOperations That means that if we call say 2000 CreateRoute() at the same time, all of them will try to issue: getInstance at the beginning. So we will quickly accumulate 2000 GET requests in the queue. Once they are processed, we generate the \"POST route\" then for any processed. So It's kind of expected what is happening here. So getting back to the Zach`s PR - I think that throttling POST and GET and OPERATION calls separate is kind of good idea in general.\nBut I still don't really understand why do we get \"RateLimitExceeded\". Is that because of Kubelets?\nI think that one problem we have at the RouteController level is that if CreateRoute fails, we simply don't react on it. And we will wait at least 5 minutes before even retrying it. We should add some retries at the route controller level too. I will send out some short PR for it.\nBut I still don't understand why do we get those \"RateLimitExceeded\" error\nOK - so as an update. In GCE, we have a separate rate-limit for number of in-flight CreateRoute call per project. is supposed to address this issue.\nOK - so this one is actually fixed. I mean that it is still long, but currently it's purely blocked on GCE.\nAs a comment - we have an internal bug for it already.", "commid": "kubernetes_issue_26119", "tokennum": 438}], "negative_passages": []}
{"query_id": "q-en-kubernetes-002bc89728cd2e5146319758d63519449dcadeb2768732643dcf899c0c6c6e58", "query": "} if err := proxyHandler.ServeConn(conn); err != nil && !shoulderror { // If the connection request is closed before the channel is closed // the test will fail with a ServeConn error. Since the test only return // early if expects shouldError=true, the channel is closed at the end of // the test, just before all the deferred connections Close() are executed. if isClosed() { return }", "positive_passages": [{"docid": "doc-en-kubernetes-631bc5da562ebc33bd49790f3078648639a2126878c5fc7773cd8143bd3bdf27", "text": "TestRoundTripSocks5AndNewConnection Disabled by No response No response No response /sig\nThere are no sig labels on this issue. Please add an appropriate label by using one of the following commands: - Please see the for a listing of the SIGs, working groups, and committees available. podWaitTimeout = 3 * time.Minute postStartWaitTimeout = 2 * time.Minute preStopWaitTimeout = 30 * time.Second ) Context(\"when create a pod with lifecycle hook\", func() { var targetIP string podHandleHookRequest := &v1.Pod{ ObjectMeta: metav1.ObjectMeta{ Name: \"pod-handle-http-request\", }, Spec: v1.PodSpec{ Containers: []v1.Container{ { Name: \"pod-handle-http-request\", Image: \"gcr.io/google_containers/netexec:1.7\", Ports: []v1.ContainerPort{ { ContainerPort: 8080, Protocol: v1.ProtocolTCP, }, }, }, }, }, } BeforeEach(func() { podClient = f.PodClient() By(\"create the container to handle the HTTPGet hook request.\") newPod := podClient.CreateSync(podHandleHookRequest) targetIP = newPod.Status.PodIP }) Context(\"when it is exec hook\", func() { var file string testPodWithExecHook := func(podWithHook *v1.Pod) { podCheckHook := getExecHookTestPod(\"pod-check-hook\", // Wait until the file is created. []string{\"sh\", \"-c\", fmt.Sprintf(\"while [ ! 
-e %s ]; do sleep 1; done\", file)}, ) By(\"create the pod with lifecycle hook\") podClient.CreateSync(podWithHook) if podWithHook.Spec.Containers[0].Lifecycle.PostStart != nil { By(\"create the hook check pod\") podClient.Create(podCheckHook) By(\"wait for the hook check pod to success\") podClient.WaitForSuccess(podCheckHook.Name, postStartWaitTimeout) } By(\"delete the pod with lifecycle hook\") podClient.DeleteSync(podWithHook.Name, metav1.NewDeleteOptions(15), framework.DefaultPodDeletionTimeout) if podWithHook.Spec.Containers[0].Lifecycle.PreStop != nil { By(\"create the hook check pod\") podClient.Create(podCheckHook) By(\"wait for the prestop check pod to success\") podClient.WaitForSuccess(podCheckHook.Name, preStopWaitTimeout) } testPodWithHook := func(podWithHook *v1.Pod) { By(\"create the pod with lifecycle hook\") podClient.CreateSync(podWithHook) if podWithHook.Spec.Containers[0].Lifecycle.PostStart != nil { By(\"check poststart hook\") Eventually(func() error { return podClient.MatchContainerOutput(podHandleHookRequest.Name, podHandleHookRequest.Spec.Containers[0].Name, `GET /echo?msg=poststart`) }, postStartWaitTimeout, podCheckInterval).Should(BeNil()) } BeforeEach(func() { file = \"/tmp/test-\" + string(uuid.NewUUID()) }) AfterEach(func() { By(\"cleanup the temporary file created in the test.\") cleanupPod := getExecHookTestPod(\"pod-clean-up\", []string{\"rm\", file}) podClient.Create(cleanupPod) podClient.WaitForSuccess(cleanupPod.Name, podWaitTimeout) }) It(\"should execute poststart exec hook properly [Conformance]\", func() { podWithHook := getExecHookTestPod(\"pod-with-poststart-exec-hook\", // Block forever []string{\"tail\", \"-f\", \"/dev/null\"}, ) podWithHook.Spec.Containers[0].Lifecycle = &v1.Lifecycle{ PostStart: &v1.Handler{ Exec: &v1.ExecAction{Command: []string{\"touch\", file}}, }, } testPodWithExecHook(podWithHook) }) It(\"should execute prestop exec hook properly [Conformance]\", func() { podWithHook := getExecHookTestPod(\"pod-with-prestop-exec-hook\", // Block forever []string{\"tail\", \"-f\", \"/dev/null\"}, ) podWithHook.Spec.Containers[0].Lifecycle = &v1.Lifecycle{ PreStop: &v1.Handler{ Exec: &v1.ExecAction{Command: []string{\"touch\", file}}, By(\"delete the pod with lifecycle hook\") podClient.DeleteSync(podWithHook.Name, metav1.NewDeleteOptions(15), framework.DefaultPodDeletionTimeout) if podWithHook.Spec.Containers[0].Lifecycle.PreStop != nil { By(\"check prestop hook\") Eventually(func() error { return podClient.MatchContainerOutput(podHandleHookRequest.Name, podHandleHookRequest.Spec.Containers[0].Name, `GET /echo?msg=prestop`) }, preStopWaitTimeout, podCheckInterval).Should(BeNil()) } } It(\"should execute poststart exec hook properly [Conformance]\", func() { lifecycle := &v1.Lifecycle{ PostStart: &v1.Handler{ Exec: &v1.ExecAction{ Command: []string{\"sh\", \"-c\", \"curl http://\" + targetIP + \":8080/echo?msg=poststart\"}, }, } testPodWithExecHook(podWithHook) }) }) Context(\"when it is http hook\", func() { var targetIP string podHandleHookRequest := &v1.Pod{ ObjectMeta: metav1.ObjectMeta{ Name: \"pod-handle-http-request\", }, Spec: v1.PodSpec{ Containers: []v1.Container{ { Name: \"pod-handle-http-request\", Image: \"gcr.io/google_containers/netexec:1.7\", Ports: []v1.ContainerPort{ { ContainerPort: 8080, Protocol: v1.ProtocolTCP, }, }, }, } podWithHook := getPodWithHook(\"pod-with-poststart-exec-hook\", \"gcr.io/google_containers/hostexec:1.2\", lifecycle) testPodWithHook(podWithHook) }) It(\"should execute prestop exec hook properly 
[Conformance]\", func() { lifecycle := &v1.Lifecycle{ PreStop: &v1.Handler{ Exec: &v1.ExecAction{ Command: []string{\"sh\", \"-c\", \"curl http://\" + targetIP + \":8080/echo?msg=prestop\"}, }, }, } BeforeEach(func() { By(\"create the container to handle the HTTPGet hook request.\") newPod := podClient.CreateSync(podHandleHookRequest) targetIP = newPod.Status.PodIP }) testPodWithHttpHook := func(podWithHook *v1.Pod) { By(\"create the pod with lifecycle hook\") podClient.CreateSync(podWithHook) if podWithHook.Spec.Containers[0].Lifecycle.PostStart != nil { By(\"check poststart hook\") Eventually(func() error { return podClient.MatchContainerOutput(podHandleHookRequest.Name, podHandleHookRequest.Spec.Containers[0].Name, `GET /echo?msg=poststart`) }, postStartWaitTimeout, podCheckInterval).Should(BeNil()) } By(\"delete the pod with lifecycle hook\") podClient.DeleteSync(podWithHook.Name, metav1.NewDeleteOptions(15), framework.DefaultPodDeletionTimeout) if podWithHook.Spec.Containers[0].Lifecycle.PreStop != nil { By(\"check prestop hook\") Eventually(func() error { return podClient.MatchContainerOutput(podHandleHookRequest.Name, podHandleHookRequest.Spec.Containers[0].Name, `GET /echo?msg=prestop`) }, preStopWaitTimeout, podCheckInterval).Should(BeNil()) } } It(\"should execute poststart http hook properly [Conformance]\", func() { podWithHook := &v1.Pod{ ObjectMeta: metav1.ObjectMeta{ Name: \"pod-with-poststart-http-hook\", }, Spec: v1.PodSpec{ Containers: []v1.Container{ { Name: \"pod-with-poststart-http-hook\", Image: framework.GetPauseImageNameForHostArch(), Lifecycle: &v1.Lifecycle{ PostStart: &v1.Handler{ HTTPGet: &v1.HTTPGetAction{ Path: \"/echo?msg=poststart\", Host: targetIP, Port: intstr.FromInt(8080), }, }, }, }, }, }, } testPodWithHttpHook(podWithHook) }) It(\"should execute prestop http hook properly [Conformance]\", func() { podWithHook := &v1.Pod{ ObjectMeta: metav1.ObjectMeta{ Name: \"pod-with-prestop-http-hook\", podWithHook := getPodWithHook(\"pod-with-prestop-exec-hook\", \"gcr.io/google_containers/hostexec:1.2\", lifecycle) testPodWithHook(podWithHook) }) It(\"should execute poststart http hook properly [Conformance]\", func() { lifecycle := &v1.Lifecycle{ PostStart: &v1.Handler{ HTTPGet: &v1.HTTPGetAction{ Path: \"/echo?msg=poststart\", Host: targetIP, Port: intstr.FromInt(8080), }, Spec: v1.PodSpec{ Containers: []v1.Container{ { Name: \"pod-with-prestop-http-hook\", Image: framework.GetPauseImageNameForHostArch(), Lifecycle: &v1.Lifecycle{ PreStop: &v1.Handler{ HTTPGet: &v1.HTTPGetAction{ Path: \"/echo?msg=prestop\", Host: targetIP, Port: intstr.FromInt(8080), }, }, }, }, }, }, } podWithHook := getPodWithHook(\"pod-with-poststart-http-hook\", framework.GetPauseImageNameForHostArch(), lifecycle) testPodWithHook(podWithHook) }) It(\"should execute prestop http hook properly [Conformance]\", func() { lifecycle := &v1.Lifecycle{ PreStop: &v1.Handler{ HTTPGet: &v1.HTTPGetAction{ Path: \"/echo?msg=prestop\", Host: targetIP, Port: intstr.FromInt(8080), }, } testPodWithHttpHook(podWithHook) }) }, } podWithHook := getPodWithHook(\"pod-with-prestop-http-hook\", framework.GetPauseImageNameForHostArch(), lifecycle) testPodWithHook(podWithHook) }) }) }) func getExecHookTestPod(name string, cmd []string) *v1.Pod { func getPodWithHook(name string, image string, lifecycle *v1.Lifecycle) *v1.Pod { return &v1.Pod{ ObjectMeta: metav1.ObjectMeta{ Name: name,", "positive_passages": [{"docid": "doc-en-kubernetes-523d59ec5104a53f44aa3efaa79fa37e525dbabb537793d9e95c31ff14a3e1c1", "text": "The 
prestop and poststart exec tests volume mount /tmp from the host and attempt to create a file there from inside the pod. This is blocked by selinux. Failure:\nwe need to have a mechanism to say that a test should not be run if selinux is enabled (if legitimately required) or fix each test (where appropriate)", "commid": "kubernetes_issue_42905", "tokennum": 76}], "negative_passages": []}
{"query_id": "q-en-kubernetes-0065dda5385eb3b17e6502fa715d517455f15bfce77f0e78077e4ff98bdab5bf", "query": "clusterAddonLabelKey = \"k8s-app\" kubeAPIServerLabelName = \"kube-apiserver\" clusterComponentKey = \"component\" svcReadyTimeout = 1 * time.Minute ) var (", "positive_passages": [{"docid": "doc-en-kubernetes-c5b2d72b2e785256d7a2577807fdbd43ef000a4da594f69c949ac55e4186d114", "text": "[x] two weeks soak end date : 18 March 2021 According to this APIsnoop query, there are still some remaining RESOURCENAME endpoints which are untested. with this query you can filter untested endpoints by their category and eligiblity for conformance. e.g below shows a query to find all conformance eligible untested,stable,core endpoints Note: Community feedback for the e2e test has lead to a number of improvements including extending the test coverage to all outstanding endpoints listed above. As each endpoint is tested a ’watch’ confirms that the result is valid before testing the next endpoint. readCoreV1NamespacedServiceStatus (get /status) patchCoreV1NamespacedServiceStatus (patch /status) replaceCoreV1NamespacedServiceStatus (update /status) patchCoreV1NamespacedService Create a Service with a static label Patch the Service with a new label and updated data Get the Service to ensure it’s patched Upate the Service with a new label and updated data Get the Service to ensure it’s updated Delete Namespaced Service via a Collection with a LabelSelector This query by apisnoop shows that all outstanding endpoints as list at the start of this document have been hit by the e2e test. Note that the results do include other service endpoints that have been addressed in other conformance tests. If a test with these calls gets merged, test coverage will go up by 4 points This test is also created with the goal of conformance promotion. /sig testing /sig architecture /area conformance\nCan you please review the proposed test and advise if we could go ahead with a PR\n/assign\nIt looks like it's patching metadata and spec but not status?\nIssues go stale after 90d of inactivity. Mark the issue as fresh with . Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with . Send feedback to sig-testing, kubernetes/test-infra and/or . /lifecycle stale\n/remove-lifecycle stale\nI've been retesting this mock test and apisnoop does log as been hit. From my testing below it looks like the patchStatus is working as expected or have I missed something?", "commid": "kubernetes_issue_94867", "tokennum": 517}], "negative_passages": []}
{"query_id": "q-en-kubernetes-0065dda5385eb3b17e6502fa715d517455f15bfce77f0e78077e4ff98bdab5bf", "query": "clusterAddonLabelKey = \"k8s-app\" kubeAPIServerLabelName = \"kube-apiserver\" clusterComponentKey = \"component\" svcReadyTimeout = 1 * time.Minute ) var (", "positive_passages": [{"docid": "doc-en-kubernetes-6c98fb90bee2cd0d2fc404edc784deab934fddaa81022ed1883862c51f5dae6d", "text": "Using the following code before the patch request Gives this response Then after running the patch ServiceStatus Then running The response shows the patched changes.\nDuring the Conformance Meeting on 12th of January 2021, reviewed approach with Clayton, and we will start on the test PR this week!\n/reopen\nReopened this issue. case expected.GetAggregationRule() == nil && existing.GetAggregationRule() != nil: // we didn't expect this to be an aggregated role at all, remove the existing aggregation result.Role.SetAggregationRule(nil) result.Operation = ReconcileUpdate case !removeExtraPermissions && len(result.MissingAggregationRuleSelectors) > 0: // add missing rules in the union case aggregationRule := result.Role.GetAggregationRule()", "positive_passages": [{"docid": "doc-en-kubernetes-cf86a1652325e783afd2e6de68ee19a134c3d5bc67e73b574ad8b67c1852b0fb", "text": "/kind bug What happened: I deployed a fresh custom HA kubernetes v1.10.2 cluster with 3 nodes (all controller and worker currently, more workers planned). I then created a namespace called \"stummi\" and a rolebinding with \"user=stummi clusterrole=admin\" in this namespace. User stummi (successfully authenticated by TLS client certificate) was not able to list pods or anything else in his namespace: While trying to figure things out, I downgraded to v1.8.11 (a version without cluster role aggregation, that worked) and also tried v1.11 alpha2 (which did not work). I never purged my etcd and I sadly do not have logs from before the different versions. Shortly after posting on slack and before writing this ticket, I dumped the cluster roles in question: Note the rules. What you expected to happen: I expected user stummi to be able to do most things in his namespace. How to reproduce it (as minimally and precisely as possible): Anything else we need to know?: I posted this problem on slack and helped me and wanted this issue opened. /sig auth The cluster is deployed with ansible and I can give the playbooks when required. Environment: Kubernetes version (use ):Cloud provider or hardware configuration: 3 virtual machines (2 cores, 8GB memory) OS (e.g. from /etc/os-release): Debian Stretch Kernel (e.g. ): Install tools: custom ansible playbooks Others: Please let me know if you need more information. I'm happy to provide everything you need to fix this :)\n/assign", "commid": "kubernetes_issue_63760", "tokennum": 363}], "negative_passages": []}
{"query_id": "q-en-kubernetes-0073702efbb5981682ba3c908601a3ddb5372466bf4d5913e21b59c06db1e6e0", "query": "{utiliptables.TableNAT, KubeNodePortChain}, {utiliptables.TableNAT, KubeLoadBalancerChain}, {utiliptables.TableNAT, KubeMarkMasqChain}, {utiliptables.TableNAT, KubeMarkDropChain}, {utiliptables.TableFilter, KubeForwardChain}, } var iptablesEnsureChains = []struct { table utiliptables.Table chain utiliptables.Chain }{ {utiliptables.TableNAT, KubeMarkDropChain}, } var iptablesCleanupChains = []struct { table utiliptables.Table chain utiliptables.Chain", "positive_passages": [{"docid": "doc-en-kubernetes-6cbf7dcb5e6486cb532532a59cda0ac4f93a00ec9c96bc20136b3a20d89cc61b", "text": "
PLEASE NOTE: This document applies to the HEAD of the source tree
If you are using a released version of Kubernetes, you should refer to the docs that go with that version. The latest 1.0.x release of this document can be found [here](http://releases.k8s.io/release-1.0/docs/devel/adding-an-APIGroup.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). -- Adding an API Group =============== This document includes the steps to add an API group. You may also want to take a look at PR [#16621](https://github.com/kubernetes/kubernetes/pull/16621) and PR [#13146](https://github.com/kubernetes/kubernetes/pull/13146), which add API groups. Please also read about [API conventions](api-conventions.md) and [API changes](api_changes.md) before adding an API group. ### Your core group package: We plan on improving the way the types are factored in the future; see [#16062](https://github.com/kubernetes/kubernetes/pull/16062) for the directions in which this might evolve. 1. Create a folder in pkg/apis to hold you group. Create types.go in pkg/apis/``/ and pkg/apis/``/``/ to define API objects in your group; 2. Create pkg/apis/``/{register.go, ``/register.go} to register this group's API objects to the encoding/decoding scheme (e.g., [pkg/apis/extensions/register.go](../../pkg/apis/extensions/register.go) and [pkg/apis/extensions/v1beta1/register.go](../../pkg/apis/extensions/v1beta1/register.go); 3. Add a pkg/apis/``/install/install.go, which is responsible for adding the group to the `latest` package, so that other packages can access the group's meta through `latest.Group`. You probably only need to change the name of group and version in the [example](../../pkg/apis/extensions/install/install.go)). You need to import this `install` package in {pkg/master, pkg/client/unversioned, cmd/kube-version-change}/import_known_versions.go, if you want to make your group accessible to other packages in the kube-apiserver binary, binaries that uses the client package, or the kube-version-change tool. Step 2 and 3 are mechanical, we plan on autogenerate these using the cmd/libs/go2idl/ tool. ### Scripts changes and auto-generated code: 1. Generate conversions and deep-copies: 1. Add your \"group/\" or \"group/version\" into hack/after-build/{update-generated-conversions.sh, update-generated-deep-copies.sh, verify-generated-conversions.sh, verify-generated-deep-copies.sh}; 2. Run hack/update-generated-conversions.sh, hack/update-generated-deep-copies.sh. 2. Generate files for Ugorji codec: 1. Touch types.generated.go in pkg/apis/``{/, ``}; 2. Run hack/update-codecgen.sh. ### Client (optional): We are overhauling pkg/client, so this section might be outdated; see [#15730](https://github.com/kubernetes/kubernetes/pull/15730) for how the client package might evolve. Currently, to add your group to the client package, you need to 1. Create pkg/client/unversioned/``.go, define a group client interface and implement the client. You can take pkg/client/unversioned/extensions.go as a reference. 2. Add the group client interface to the `Interface` in pkg/client/unversioned/client.go and add method to fetch the interface. Again, you can take how we add the Extensions group there as an example. 3. If you need to support the group in kubectl, you'll also need to modify pkg/kubectl/cmd/util/factory.go. ### Make the group/version selectable in unit tests (optional): 1. Add your group in pkg/api/testapi/testapi.go, then you can access the group in tests through testapi.``; 2. 
Add your \"group/version\" to `KUBE_API_VERSIONS` and `KUBE_TEST_API_VERSIONS` in hack/test-go.sh. []() ", "positive_passages": [{"docid": "doc-en-kubernetes-f256e345b7b51b103065fba025a54804b2fa9969c4fe88ae4c8d8670e2ed79f1", "text": "AFAIK, we don't have a documentation on how to add a new API group. Currently we have and that try to add a new API group. I will draft a guide to ease the future endeavor of adding groups. And probably by writing this guide, I will see how can we make the API group machinery easier to use. cc", "commid": "kubernetes_issue_16626", "tokennum": 70}], "negative_passages": []}
{"query_id": "q-en-kubernetes-231f42e716de7cff047514924c7d838007ada0188f38ccfbe935000e3482ef18", "query": "} // prepare kube clients. client, leaderElectionClient, eventClient, err := createClients(c.ComponentConfig.ClientConnection, o.Master) client, leaderElectionClient, eventClient, err := createClients(c.ComponentConfig.ClientConnection, o.Master, c.ComponentConfig.LeaderElection.RenewDeadline.Duration) if err != nil { return nil, err }", "positive_passages": [{"docid": "doc-en-kubernetes-638d244f4d44257351ae4786f705d88f47a0e12c74ce71a459164cb07c9c1943", "text": "
PLEASE NOTE: This document applies to the HEAD of the source tree
If you are using a released version of Kubernetes, you should refer to the docs that go with that version. The latest 1.0.x release of this document can be found [here](http://releases.k8s.io/release-1.0/docs/user-guide/kubectl-overview.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). -- # kubectl overview **Table of Contents** - [kubectl overview](#kubectl-overview) - [Overview](#overview) - [Common Operations](#common-operations) - [Kubectl Operations](#kubectl-operations) - [Resource Types](#resource-types) This overview is intended for anyone who wants to use `kubectl` command line tool to interact with Kubernetes cluster. Please remember that it is built for quick started with `kubectl`; for complete and detailed information, please refer to [kubectl](kubectl/kubectl.md). TODO: auto-generate this file to stay up with `kubectl` changes. Please see [#14177](https://github.com/kubernetes/kubernetes/pull/14177). ## Overview `kubectl` controls the Kubernetes cluster manager. The synopsis is: ``` kubectl [command] [TYPE] [NAME] [flags] ``` This specifies: - `command` is a certain operation performed on a given resource(s), such as `create`, `get`, `describe`, `delete` etc. - `TYPE` is the type of resource(s). Both singular and plural forms are accepted. For example, `node(s)`, `namespace(s)`, `pod(s)`, `replicationcontroller(s)`, `service(s)` etc. - `NAME` is the name of resource(s). `TYPE NAME` can be specified as `TYPE name1 name2` or `TYPE/name1 TYPE/name2`. `TYPE NAME` can also be specified by one or more file arguments: `-f file1 -f file2 ...`, [use YAML rather than JSON](config-best-practices.md) since YAML tends to be more user-friendly for config. - `flags` are used to provide more control information when running a command. For example, you can use `-s` or `--server` to specify the address and port of the Kubernetes API server. Command line flags override their corresponding default values and environment variables. [Use short flags sparingly, only for the most frequently used options](../devel/kubectl-conventions.md). Please use `kubectl help [command]` for detailed information about a command. Please refer to [kubectl](kubectl/kubectl.md) for a complete list of available commands and flags. ## Common Operations For explanation, here I gave some mostly often used `kubectl` command examples. Please replace sample names with actual values if you would like to try these commands. 1. `kubectl create` - Create a resource by filename or stdin // Create a service using the data in example-service.yaml. $ kubectl create -f example-service.yaml // Create a replication controller using the data in example-controller.yaml. $ kubectl create -f example-controller.yaml // Create objects whose definitions are in a directory. This looks for config objects in all .yaml, .yml, and .json files in and passes them to create. $ kubectl create -f 2. `kubectl get` - Display one or many resources // List all pods in ps output format. $ kubectl get pods // List all pods in ps output format with more information (such as node name). $ kubectl get pods -o wide // List a single replication controller with specified name in ps output format. You can use the alias 'rc' instead of 'replicationcontroller'. $ kubectl get replicationcontroller // List all replication controllers and services together in ps output format. $ kubectl get rc,services 3. 
`kubectl describe` - Show details of a specific resource or group of resources // Describe a node $ kubectl describe nodes // Describe a pod $ kubectl describe pods/ // Describe all pods managed by the replication controller // (rc-created pods get the name of the rc as a prefix in the pod the name). $ kubectl describe pods 4. `kubectl delete` - Delete resources by filenames, stdin, resources and names, or by resources and label selector // Delete a pod using the type and name specified in pod.yaml. $ kubectl delete -f pod.yaml // Delete pods and services with label name=. $ kubectl delete pods,services -l name= // Delete all pods $ kubectl delete pods --all 5. `kubectl exec` - Execute a command in a container // Get output from running 'date' from pod , using the first container by default. $ kubectl exec date // Get output from running 'date' in from pod . $ kubectl exec -c date // Get an interactive tty and run /bin/bash from pod , using the first container by default. $ kubectl exec -ti /bin/bash 6. `kubectl logs` - Print the logs for a container in a pod. // Returns snapshot of logs from pod . $ kubectl logs // Starts streaming of logs from pod , it is something like 'tail -f'. $ kubectl logs -f ## Kubectl Operations The following table describes all `kubectl` operations and their general synopsis: Operation | Synopsis\t| Description -------------------- | -------------------- | -------------------- annotate\t| `kubectl annotate [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N [--resource-version=version]` | Update the annotations on a resource api-versions\t| `kubectl api-versions` | Print available API versions attach\t\t| `kubectl attach POD -c CONTAINER` | Attach to a running container cluster-info\t| `kubectl cluster-info` | Display cluster info config\t\t| `kubectl config SUBCOMMAND` | Modifies kubeconfig files create\t\t| `kubectl create -f FILENAME` | Create a resource by filename or stdin delete\t\t| `kubectl delete ([-f FILENAME] | TYPE [(NAME | -l label | --all)])` | Delete resources by filenames, stdin, resources and names, or by resources and label selector describe\t| `kubectl describe (-f FILENAME | TYPE [NAME_PREFIX | -l label] | TYPE/NAME)` | Show details of a specific resource or group of resources edit\t\t| `kubectl edit (RESOURCE/NAME | -f FILENAME)` | Edit a resource on the server exec\t\t| `kubectl exec POD [-c CONTAINER] -- COMMAND [args...]` | Execute a command in a container expose\t\t| `kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP] [--target-port=number-or-name] [--name=name] [----external-ip=external-ip-of-service] [--type=type]` | Take a replication controller, service or pod and expose it as a new Kubernetes Service get\t\t| `kubectl get [(-o|--output=)json|yaml|wide|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=...] (TYPE [NAME | -l label] | TYPE/NAME ...) [flags]` | Display one or many resources label\t\t| `kubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... 
KEY_N=VAL_N [--resource-version=version]` | Update the labels on a resource logs\t\t| `kubectl logs [-f] [-p] POD [-c CONTAINER]` | Print the logs for a container in a pod namespace\t| `kubectl namespace [namespace]` | SUPERSEDED: Set and view the current Kubernetes namespace patch\t\t| `kubectl patch (-f FILENAME | TYPE NAME) -p PATCH` | Update field(s) of a resource by stdin port-forward\t| `kubectl port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]` | Forward one or more local ports to a pod proxy\t\t| `kubectl proxy [--port=PORT] [--www=static-dir] [--www-prefix=prefix] [--api-prefix=prefix]` | Run a proxy to the Kubernetes API server replace\t\t| `kubectl replace -f FILENAME` | Replace a resource by filename or stdin rolling-update\t| `kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f NEW_CONTROLLER_SPEC)` | Perform a rolling update of the given ReplicationController run\t\t| `kubectl run NAME --image=image [--env=\"key=value\"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json]` | Run a particular image on the cluster scale\t\t| `kubectl scale [--resource-version=version] [--current-replicas=count] --replicas=COUNT (-f FILENAME | TYPE NAME)` | Set a new size for a Replication Controller stop\t\t| `kubectl stop (-f FILENAME | TYPE (NAME | -l label | --all))` | Deprecated: Gracefully shut down a resource by name or filename version\t\t| `kubectl version` | Print the client and server version information ## Resource Types The `kubectl` supports the following resource types, and their abbreviated aliases: Resource Type\t| Abbreviated Alias -------------------- | -------------------- componentstatuses\t|\tcs events\t|\tev endpoints\t|\tep horizontalpodautoscalers\t|\thpa limitranges\t|\tlimits nodes\t|\tno namespaces\t|\tns pods\t|\tpo persistentvolumes\t|\tpv persistentvolumeclaims\t|\tpvc resourcequotas\t|\tquota replicationcontrollers\t|\trc daemonsets\t|\tds services\t|\tsvc []() ", "positive_passages": [{"docid": "doc-en-kubernetes-ff400dc6754846fd0992db1028d6e5a3aef2be3cc27ba2c2792aa3874bf81494", "text": "Something like: cc\ncc\n- Can you please look at this issue since you're recently working on documentation?\nthanks for the reminder. I will take care of it.", "commid": "kubernetes_issue_11814", "tokennum": 34}], "negative_passages": []}
{"query_id": "q-en-kubernetes-3efdca3d7f3f8a011b4c9c0e64c2312418fac47ef7639a403c56b324c2c90619", "query": "memoryUsage, err := strconv.ParseFloat(lines[0], 64) framework.ExpectNoError(err) var rssToken, inactiveFileToken string if isCgroupV2 { // Use Anon memory for RSS as cAdvisor on cgroupv2 // see https://github.com/google/cadvisor/blob/a9858972e75642c2b1914c8d5428e33e6392c08a/container/libcontainer/handler.go#L799 rssToken = \"anon\" inactiveFileToken = \"inactive_file\" } else { rssToken = \"total_rss\" inactiveFileToken = \"total_inactive_file\" } var totalInactiveFile float64 for _, line := range lines[1:] { tokens := strings.Split(line, \" \") if tokens[0] == \"total_rss\" { if tokens[0] == rssToken { rss, err = strconv.ParseFloat(tokens[1], 64) framework.ExpectNoError(err) } if tokens[0] == \"total_inactive_file\" { if tokens[0] == inactiveFileToken { totalInactiveFile, err = strconv.ParseFloat(tokens[1], 64) framework.ExpectNoError(err) }", "positive_passages": [{"docid": "doc-en-kubernetes-0c07e91eb7d66ee92c3b6d05f523cece58039dffcf1123c44ba752c30922deab", "text": "N/A [sig-node] NodeProblemDetector: N/A https://prow- Not cgroupv2 compatible when access cpu and memory data\n/sig node\nHi, can i work on this ?\nSure. Thanks\n/assign\n/triage accepted\n/priority important-soon\n/cc\nare you still working on this? Were you able to setup environment to repro the test failure? Let us know if you need any help. Thanks!\nSorry for the delay, I followed the wrong guide and ended up getting stuck. But now I have managed to set it up. Please give me few more days to work on it.\nThanks Feel free to reach out here or on slack if you need any help, thanks for working on this!\n/retitle Make nodeproblemdetector test cgroupv2 compatible\nFailing job:\nTestgrid:", "commid": "kubernetes_issue_105178", "tokennum": 180}], "negative_passages": []}
{"query_id": "q-en-kubernetes-3f2345f0f9e0c7e497537e977ca89af11fa266b9e9fe1b2eec343b4dacf05b3d", "query": "// How long to wait for a log pod to be displayed const podLogTimeout = 45 * time.Second // utility function for gomega Eventually func getPodLogs(c *client.Client, namespace, podName, containerName string) (string, error) { logs, err := c.Get().Resource(\"pods\").Namespace(namespace).Name(podName).SubResource(\"log\").Param(\"container\", containerName).Do().Raw() if err != nil { return \"\", err } if err == nil && strings.Contains(string(logs), \"Internal Error\") { return \"\", fmt.Errorf(\"Internal Error\") } return string(logs), err } var _ = Describe(\"Downward API volume\", func() { f := NewFramework(\"downward-api\") It(\"should provide podname only [Conformance]\", func() {", "positive_passages": [{"docid": "doc-en-kubernetes-cf37b0ab7a08d0e2457507bc271b1cb7d1e13fbaee3773971363bc32ae71b35b", "text": "We are running heapster, dns server, kube-proxy etc. in kube-system namespace. Some e2e tests try to make sure all those kube-system pods are at ready state before firing the real tests, others just simply ingore them. But both cases might cause e2e flakiness when some of those kube-system containers are in crashloop, for example, for heapster crashloop, for influxdb crashloop, etc. When such cases happen, the developer doesn't have much information on why since the nodes might be teared down. We should watch those kube-system pods, and onContainerFailure, we should collect the related logs. One simple way is using kubectl logs \"//federation/apis/federation:go_default_library\", \"//federation/apis/federation/v1beta1:go_default_library\", \"//federation/client/clientset_generated/federation_clientset/fake:go_default_library\", \"//pkg/api:go_default_library\",", "positive_passages": [{"docid": "doc-en-kubernetes-f29982137c07dadb3e257d7391e0d9b7bfd5f1ce8fdca72f798301cc159728a0", "text": "Is this a BUG REPORT or FEATURE REQUEST?: /kind bug What happened: After labeling clusters in a Federation, when you type the field is empty. What you expected to happen: The fiels should contain information about the labels in the cluster How to reproduce it (as minimally and precisely as possible): a federation control pane in any Cloud Provider and any DNS provider your cluster: info aboyut the cluster with the flag: Note: if you describe it, you see the labels being : Environment: Kubernetes version (use ): 1.8.0 Cloud provider or hardware configuration**: anyone OS (e.g. from /etc/os-release): Ubuntu 16.04 Kernel (e.g. ): Linux Install tools: curl Others:\n/sig multicluster\nI'm working on it.", "commid": "kubernetes_issue_53729", "tokennum": 179}], "negative_passages": []}
{"query_id": "q-en-kubernetes-3f9cf48897560087502c9fc20145c6bd87578c02d4e203879f6d77ab89a391dd", "query": "glog.Infof(\"Running tests for APIVersion: %s\", apiVersion) manifestURL := ServeCachedManifestFile() apiServerURL, configFilePath := startComponents(manifestURL, apiVersion) firstManifestURL := ServeCachedManifestFile(testPodSpecFile) secondManifestURL := ServeCachedManifestFile(testManifestFile) apiServerURL, configFilePath := startComponents(firstManifestURL, secondManifestURL, apiVersion) // Ok. we're good to go. glog.Infof(\"API Server started on %s\", apiServerURL)", "positive_passages": [{"docid": "doc-en-kubernetes-56027fa26048aea64718f2bd7d7c5d6f58ddca1472c1fc738c03141b6dea3ad6", "text": "Currently it assumes it can read ContainerManifest - it should try both\nWhy not hold on this until we have Kubelet-read-Pod in?\nAlso the HTTP source has the same problem.\nIt should read Pod. Changed the title.\nIdeally, I'd like all Pods created directly via Kubelet to be to etcd and surfaced via the API. If updates are made via the API, I'd like those to propagate back to the node. We can discuss whether the config file should be updated or ignored. At some point, I'd like Kubelet to persistently cache pods received from the apiserver, perhaps by just writing them all out as config files.\nis the issue to sync local pods back to apiserver.\nthis is somewhat related to other issues we talked about.\nDetail: We want locally specified pods to be able to use YAML config files, but we also want to rip YAML out of the API server . The config-file behavior should be as similar to the kubectl experience. Kubelet could use parts of the kubectl library to translate YAML to JSON. Vision: Eventually, I'd like Kubelet to use the apiserver library and support exactly the same Pod API as the master, at which time I'd like kubectl be able to create pods directly on Kubelets, as well as reading from config files. I also expect to have a per-node controller . I'd like to be able to use the per-node controller to update pods started via config files (or direct kubectl-kubelet pods), overriding those original files. For availability and self-hosting, we'll eventually need local checkpointing , ideally just using serialized JSON for each object. If Kubelet used apiserver, this could be done using a custom registry implementation.\nI think that is a prerequisite for it.\nIf noone picks it up, I'm going to start working on it tomorrow.\nthanks, wojtek-t\nfixed it for http channel", "commid": "kubernetes_issue_3372", "tokennum": 445}], "negative_passages": []}
{"query_id": "q-en-kubernetes-3feb39c3f7e2878793e9de39945777fbfef3e77605194d0e74efb65b3f0775f1", "query": "// buildELBSecurityGroupList returns list of SecurityGroups which should be // attached to ELB created by a service. List always consist of at least // 1 member which is an SG created for this service. Extra groups can be // 1 member which is an SG created for this service or a SG from the Global config. Extra groups can be // specified via annotation func (c *Cloud) buildELBSecurityGroupList(serviceName types.NamespacedName, loadBalancerName, annotation string) ([]string, error) { var err error", "positive_passages": [{"docid": "doc-en-kubernetes-d6a2e132f72678770d8d41e08acf9f7359a4b887e33d930dbbc18ea7ecc9db7e", "text": "/kind bug What happened: When setting an elbSecurityGroup in for AWS, and deploying multiple services with differing ports, then the security group rules flap in AWS, causing services to become inaccessible (only one will be correctly configured at a time). What you expected to happen: I expect that the configured elbSecurityGroup's rules will not be touched at all. It must be pre-configured to permit access. Alternatively, all ports for all services are queried and managed together. How to reproduce it (as minimally and precisely as possible): See above. Anything else we need to know?: Most people are not running large clusters which may hit the AWS limits, so they won't use this option, nor see this issue. Environment: Kubernetes version (use ): 1.7.1 Cloud provider or hardware configuration**: AWS OS (e.g. from /etc/os-release): CentOS 7 Kernel (e.g. ): 4.10.13-1-ARCH Install tools: Custom Others:\nThere are no sig labels on this issue. Please by: a sig: e.g., to notify the contributor experience sig, OR the label manually: e.g., to apply the label Note: Method 1 will trigger an email to the group. You can find the group list and label list . The in the method 1 has to be replaced with one of these: bugs, feature-requests, pr-reviews, test-failures, proposals\n/sig aws\nThis is fixed by", "commid": "kubernetes_issue_50105", "tokennum": 331}], "negative_passages": []}
{"query_id": "q-en-kubernetes-3ff1b8d08100a08f3d19a06321feeddf479f63a137fb9563a2d32342ce18fd1e", "query": "if !apierrors.IsBadRequest(err) { t.Errorf(\"expected HTTP status: BadRequest, got: %#v\", apierrors.ReasonForError(err)) } if err.Error() != expectedError { if !strings.Contains(err.Error(), expectedError) { t.Errorf(\"expected %#v, got %#v\", expectedError, err.Error()) } }", "positive_passages": [{"docid": "doc-en-kubernetes-984aaf2876c4d4651f6765b291626bccfb7d6364087d7ebb11c97e2832c59ee5", "text": " %v:%v (config.clusterIP)\", config.TestContainerPod.Name, config.ClusterIP, e2enetwork.ClusterUDPPort)) err := config.DialFromTestContainer(\"udp\", config.ClusterIP, e2enetwork.ClusterUDPPort, config.MaxTries, 0, config.EndpointHostnames()) if err != nil { framework.Failf(\"failed dialing endpoint, %v\", err) } ginkgo.By(fmt.Sprintf(\"dialing(udp) %v --> %v:%v (nodeIP)\", config.TestContainerPod.Name, config.NodeIP, config.NodeUDPPort)) err = config.DialFromTestContainer(\"udp\", config.NodeIP, config.NodeUDPPort, config.MaxTries, 0, config.EndpointHostnames()) if err != nil { framework.Failf(\"failed dialing endpoint, %v\", err) } }) // if the endpoints pods use hostNetwork, several tests can't run in parallel // because the pods will try to acquire the same port in the host. // We run the test in serial, to avoid port conflicts.", "positive_passages": [{"docid": "doc-en-kubernetes-15fd873e113c2b87c10057fb05584532c0b0add34bb5504a54b7699060ebd74c", "text": "
PLEASE NOTE: This document applies to the HEAD of the source tree
If you are using a released version of Kubernetes, you should refer to the docs that go with that version. Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). -- # Generic Configuration Object ## Abstract This proposal proposes a new API resource, `ConfigMap`, that stores data used for the configuration of applications deployed on `Kubernetes`. The main focus points of this proposal are: * Dynamic distribution of configuration data to deployed applications. * Encapsulate configuration information and simplify `Kubernetes` deployments. * Create a flexible configuration model for `Kubernetes`. ## Motivation A `Secret`-like API resource is needed to store configuration data that pods can consume. Goals of this design: 1. Describe a `ConfigMap` API resource 2. Describe the semantics of consuming `ConfigMap` as environment variables 3. Describe the semantics of consuming `ConfigMap` as files in a volume ## Use Cases 1. As a user, I want to be able to consume configuration data as environment variables 2. As a user, I want to be able to consume configuration data as files in a volume 3. As a user, I want my view of configuration data in files to be eventually consistent with changes to the data ### Consuming `ConfigMap` as Environment Variables Many programs read their configuration from environment variables. `ConfigMap` should be possible to consume in environment variables. The rough series of events for consuming `ConfigMap` this way is: 1. A `ConfigMap` object is created 2. A pod that consumes the configuration data via environment variables is created 3. The pod is scheduled onto a node 4. The kubelet retrieves the `ConfigMap` resource(s) referenced by the pod and starts the container processes with the appropriate data in environment variables ### Consuming `ConfigMap` in Volumes Many programs read their configuration from configuration files. `ConfigMap` should be possible to consume in a volume. The rough series of events for consuming `ConfigMap` this way is: 1. A `ConfigMap` object is created 2. A new pod using the `ConfigMap` via the volume plugin is created 3. The pod is scheduled onto a node 4. The Kubelet creates an instance of the volume plugin and calls its `Setup()` method 5. The volume plugin retrieves the `ConfigMap` resource(s) referenced by the pod and projects the appropriate data into the volume ### Consuming `ConfigMap` Updates Any long-running system has configuration that is mutated over time. Changes made to configuration data must be made visible to pods consuming data in volumes so that they can respond to those changes. The `resourceVersion` of the `ConfigMap` object will be updated by the API server every time the object is modified. After an update, modifications will be made visible to the consumer container: 1. A `ConfigMap` object is created 2. A new pod using the `ConfigMap` via the volume plugin is created 3. The pod is scheduled onto a node 4. During the sync loop, the Kubelet creates an instance of the volume plugin and calls its `Setup()` method 5. The volume plugin retrieves the `ConfigMap` resource(s) referenced by the pod and projects the appropriate data into the volume 6. The `ConfigMap` referenced by the pod is updated 7. During the next iteration of the `syncLoop`, the Kubelet creates an instance of the volume plugin and calls its `Setup()` method 8. 
The volume plugin projects the updated data into the volume atomically It is the consuming pod's responsibility to make use of the updated data once it is made visible. Because environment variables cannot be updated without restarting a container, configuration data consumed in environment variables will not be updated. ### Advantages * Easy to consume in pods; consumer-agnostic * Configuration data is persistent and versioned * Consumers of configuration data in volumes can respond to changes in the data ## Proposed Design ### API Resource The `ConfigMap` resource will be added to the `extensions` API Group: ```go package api // ConfigMap holds configuration data for pods to consume. type ConfigMap struct { TypeMeta `json:\",inline\"` ObjectMeta `json:\"metadata,omitempty\"` // Data contains the configuration data. Each key must be a valid DNS_SUBDOMAIN or leading // dot followed by valid DNS_SUBDOMAIN. Data map[string]string `json:\"data,omitempty\"` } type ConfigMapList struct { TypeMeta `json:\",inline\"` ListMeta `json:\"metadata,omitempty\"` Items []ConfigMap `json:\"items\"` } ``` A `Registry` implementation for `ConfigMap` will be added to `pkg/registry/configmap`. ### Environment Variables The `EnvVarSource` will be extended with a new selector for `ConfigMap`: ```go package api // EnvVarSource represents a source for the value of an EnvVar. type EnvVarSource struct { // other fields omitted // Specifies a ConfigMap key ConfigMap *ConfigMapSelector `json:\"configMap,omitempty\"` } // ConfigMapSelector selects a key of a ConfigMap. type ConfigMapSelector struct { // The name of the ConfigMap to select a key from. ConfigMapName string `json:\"configMapName\"` // The key of the ConfigMap to select. Key string `json:\"key\"` } ``` ### Volume Source A new `ConfigMapVolumeSource` type of volume source containing the `ConfigMap` object will be added to the `VolumeSource` struct in the API: ```go package api type VolumeSource struct { // other fields omitted ConfigMap *ConfigMapVolumeSource `json:\"configMap,omitempty\"` } // ConfigMapVolumeSource represents a volume that holds configuration data type ConfigMapVolumeSource struct { // A list of configuration data keys to project into the volume in files Files []ConfigMapVolumeFile `json:\"files\"` } // ConfigMapVolumeFile represents a single file containing configuration data type ConfigMapVolumeFile struct { ConfigMapSelector `json:\",inline\"` // The relative path name of the file to be created. // Must not be absolute or contain the '..' path. Must be utf-8 encoded. // The first item of the relative path must not start with '..' Path string `json:\"path\"` } ``` **Note:** The update logic used in the downward API volume plug-in will be extracted and re-used in the volume plug-in for `ConfigMap`. 
## Examples #### Consuming `ConfigMap` as Environment Variables ```yaml apiVersion: extensions/v1beta1 kind: ConfigMap metadata: name: etcd-env-config data: number_of_members: 1 initial_cluster_state: new initial_cluster_token: DUMMY_ETCD_INITIAL_CLUSTER_TOKEN discovery_token: DUMMY_ETCD_DISCOVERY_TOKEN discovery_url: http://etcd-discovery:2379 etcdctl_peers: http://etcd:2379 ``` This pod consumes the `ConfigMap` as environment variables: ```yaml apiVersion: v1 kind: Pod metadata: name: config-env-example spec: containers: - name: etcd image: openshift/etcd-20-centos7 ports: - containerPort: 2379 protocol: TCP - containerPort: 2380 protocol: TCP env: - name: ETCD_NUM_MEMBERS valueFrom: configMap: configMapName: etcd-env-config key: number_of_members - name: ETCD_INITIAL_CLUSTER_STATE valueFrom: configMap: configMapName: etcd-env-config key: initial_cluster_state - name: ETCD_DISCOVERY_TOKEN valueFrom: configMap: configMapName: etcd-env-config key: discovery_token - name: ETCD_DISCOVERY_URL valueFrom: configMap: configMapName: etcd-env-config key: discovery_url - name: ETCDCTL_PEERS valueFrom: configMap: configMapName: etcd-env-config key: etcdctl_peers ``` ### Consuming `ConfigMap` as Volumes `redis-volume-config` is intended to be used as a volume containing a config file: ```yaml apiVersion: extensions/v1beta1 kind: ConfigMap metadata: name: redis-volume-config data: redis.conf: \"pidfile /var/run/redis.pid\nport 6379\ntcp-backlog 511\ndatabases 1\ntimeout 0\n\" ``` The following pod consumes the `redis-volume-config` in a volume: ```yaml apiVersion: v1 kind: Pod metadata: name: config-volume-example spec: containers: - name: redis image: kubernetes/redis command: \"redis-server /mnt/config-map/etc/redis.conf\" ports: - containerPort: 6379 volumeMounts: - name: config-map-volume mountPath: /mnt/config-map volumes: - name: config-map-volume configMap: files: - path: \"etc/redis.conf\" configMapName: redis-volume-config key: redis.conf ``` ### Future Improvements In the future, we may add the ability to specify an init-container that can watch the volume contents for updates and respond to changes when they occur. []() ", "positive_passages": [{"docid": "doc-en-kubernetes-ab39f39ee12e84e818aa9c92c6877733f255a4fad832176aa0525d5bb0051663", "text": "What happened: Visited It shows the release notes for v1.17 What you expected to happen: See the release notes for v1.18 How to reproduce it (as minimally and precisely as possible): Open in any browser or with curl\n/sig docs\n/assign", "commid": "kubernetes_issue_89598", "tokennum": 57}], "negative_passages": []}
{"query_id": "q-en-kubernetes-62fda47231171e9a41a7c7894c0628d6702a7caa56c47e06f6b5e96c373fd2ba", "query": "// Run begins watching and syncing daemon sets. func (dsc *DaemonSetsController) Run(workers int, stopCh <-chan struct{}) { defer util.HandleCrash() glog.Infof(\"Starting Daemon Sets controller manager\") controller.SyncAllPodsWithStore(dsc.kubeClient, dsc.podStore.Store) go dsc.dsController.Run(stopCh) go dsc.podController.Run(stopCh) go dsc.nodeController.Run(stopCh)", "positive_passages": [{"docid": "doc-en-kubernetes-b0a441a371f8db864bebc11c7303837af71a86244bb0f50ec81fe1d8a09d5c71", "text": "The test is in flaky suite and no one seems to care. We need an owner for this test. cc Last failure log:\ndo you think you can take a look at this?\nThis is happening because of a bad assumption. The assumption is: if the reflector's store is consistent with etcd, the store given back to clients of the framework.Controller is too. I'd really like a general fix applicable to all controllers for this problem, but fixing it just for the RCs is pretty easy. In more detail: This function reports when the reflector has listed from apiserver. But the store given to the reflector is the fifo of the framework.Controller. The store given back to the caller is still not consistent, and will not be till the framework.Controller puts all the objects from the fifo into the client state store (). The rc specific fix: Before starting the rc manager, seed the expectations store with an expectation like {rc:foo, adds: foo.Status.Replicas, deletes:0}. This will wait for status.Replicas number of ADD events till a timeout. The general fix: Somehow expose this through framework.Controller.HasSynced (clients don't care about the fifo anyway, shouldn't it just be an implementation detail?) ideas? cc'ing people who have written controllers that might be interested I don't think this (and generally anything flaky for 1 release) is a p0, I can fix it if you don't mind dealing with the flakes till I've cleared things off my plate. I'd be more than happy to review a fix.\nWell, it's a P0 because test is flaky. E.g last run:\n- did you have chance to look into it?\nThe \"DaemonRestart\" tests are still labeled at . I will send a PR to promote them.\nare you still planning to send aPR to promote the DaemonRestart tests out of flaky? I think this is the right thing to do.\nAck; sorry, I lost this in the muck.", "commid": "kubernetes_issue_17829", "tokennum": 444}], "negative_passages": []}
{"query_id": "q-en-kubernetes-62fe9de90eb340bf529cbe0e17b4f66ceeedebf0618fa419b4e9347d0a763e19", "query": "}) It(\"should write files of various sizes, verify size, validate content\", func() { fileSizes := []int64{1 * framework.MiB, 100 * framework.MiB} fileSizes := []int64{fileSizeSmall, fileSizeMedium} fsGroup := int64(1234) podSec := v1.PodSecurityContext{ FSGroup: &fsGroup,", "positive_passages": [{"docid": "doc-en-kubernetes-540712d0b27ee7f52d7d02bd3719013304c1ce42c6b5d8b93c3ee3671a9c1c7c", "text": "The \"Volume plugin streaming [Slow] NFS should write files of various sizes, verify size, validate content\" e2e test is failing on our 2k-node gce clusters (https://k8s-) with the following error: cc\nCould you take a look into it or reassign as apt? This is part of the correctness suite and large cluster tests are release-blocking.\n/assign\nis this run using the kubemark environment? I'll also take a look. It could be that the commands this test needs to execute may not work with kubemark. /assign\ncc\nNo, this is not kubemark. This is on a real 2000-node cluster.\nAlso all other volume-related e2e tests are passing, except this one.\nLooking at the log I see that the nfs test fails trying to verify the test file's content. The test exec's into the nfs client pod and executes: The return code from above is 127. The test then deletes the nfs client pod calling which works. The nfs-server pod is deleted in the AfterEach. For some reason the reported error is 137 but the returned err 127. The same command is exec'd for all of the tested storage plugins so not sure why it's failing just for nfs. Error 127 might be \"command not found\".\nThe other strange thing is that it's only failing in this suite and not the others.\nWhen I look at the kubelet log, I see the exec command coming through: I also see these suspicious kubelet soon afterwards: So I think the grep command is causing the node to run out of memory and get evicted. I think we can fix this either by: Making the grep command more memory efficient. I'm guessing it's reading the whole file into memory or, Make this test Serial so that it doesn't run in parallel with other pods, since it consumes a lot of memory.\nAn unrelated issue, is that the node allocatable feature didn't actually evict the pod. Instead this pod got a system OOM kill. /cc\nrelated to the memcg notifications, not node allocatable. Thanks!\nany theories why we see this only for the nfs-io tests, and not the glusterfs-io test for instance?\nthe gluster test didn't run on this in this suite.", "commid": "kubernetes_issue_51717", "tokennum": 516}], "negative_passages": []}
{"query_id": "q-en-kubernetes-62fe9de90eb340bf529cbe0e17b4f66ceeedebf0618fa419b4e9347d0a763e19", "query": "}) It(\"should write files of various sizes, verify size, validate content\", func() { fileSizes := []int64{1 * framework.MiB, 100 * framework.MiB} fileSizes := []int64{fileSizeSmall, fileSizeMedium} fsGroup := int64(1234) podSec := v1.PodSecurityContext{ FSGroup: &fsGroup,", "positive_passages": [{"docid": "doc-en-kubernetes-fd4ca8cfd935058701597731533334687f2913bf0b9d409a83f3ac801f0182e1", "text": "I'm suspecting that the reason why we didn't see a problem with this test on other suites is that the load could be higher here. I've asked to see if we can reduce the memory usage of the grep call.\npointed out that there are no newlines in the generated file, so when grep is called it could buffer the entire file in memory. Adding a newline at the end of the dd input file should do it.\n/priority critical-urgent\n[MILESTONENOTIFIER] Milestone Labels Complete Issue label settings: sig/scalability sig/storage: Issue will be escalated to these SIGs if needed. priority/critical-urgent: Never automatically move out of a release milestone; continually escalate to contributor and SIG through all available channels. kind/bug: Fixes a bug discovered during the current release. disks[i].ToBeDetached = to.BoolPtr(true) if strings.EqualFold(as.cloud.Environment.Name, \"AZURESTACKCLOUD\") { disks = append(disks[:i], disks[i+1:]...) } else { disks[i].ToBeDetached = to.BoolPtr(true) } bFoundDisk = true break }", "positive_passages": [{"docid": "doc-en-kubernetes-5bed2d3142dabe6e3fc7269ef850324d324032cbc87a456c8f2e9b79085fd590", "text": " ", "positive_passages": [{"docid": "doc-en-kubernetes-7c5448078f172c3dba4d1f6f6ed5d66f21d2fc0ea9dd38dbb1ed9b4cc49eea15", "text": "Right now, we have a mixture of user documentation and developer documentation in the top-level directory. We should make it easier to find a just user documentation. Step 1 would be to move developer documentation into a subdir like docs/development or something like that. User docs would stay in the top-level docs dir.\nmove to", "commid": "kubernetes_issue_1797", "tokennum": 71}], "negative_passages": []}
{"query_id": "q-en-kubernetes-a62793a3b95d6154d92dbb5c2a9dce87a97b4035747cff8d90565e03c4cdd943", "query": "// If the ordinal could not be parsed (ord < 0), ignore the Pod. } // make sure to update the latest status even if there is an error later defer func() { // update the set's status statusErr := ssc.updateStatefulSetStatus(ctx, set, &status) if statusErr == nil { klog.V(4).InfoS(\"Updated status\", \"statefulSet\", klog.KObj(set), \"replicas\", status.Replicas, \"readyReplicas\", status.ReadyReplicas, \"currentReplicas\", status.CurrentReplicas, \"updatedReplicas\", status.UpdatedReplicas) } else if updateErr == nil { updateErr = statusErr } else { klog.V(4).InfoS(\"Could not update status\", \"statefulSet\", klog.KObj(set), \"err\", statusErr) } }() // for any empty indices in the sequence [0,set.Spec.Replicas) create a new Pod at the correct revision for ord := 0; ord < replicaCount; ord++ { if replicas[ord] == nil {", "positive_passages": [{"docid": "doc-en-kubernetes-0cad02d163e61aff3cc2c1257b68701a42fb1145e0b51f6816fd9a85adca78e6", "text": "I create a ResourceQuota with two pods limit in my namespace. Then, I create a statefuleSet with 3 replicas. As expected, two pods are created successfully, and the third pod failed to create. But, the ReadyReplicas is not 2 ,but 1. The ReadyReplicas is correct 1Create a ResourceQuota with two pods limit in a new namespace. 2Create a statefulSet with 3 replicas 3Check if the ReadyReplicas of statefulSet matchs the number of stateful running pods. 1According to the logic of the statefulSet controller, new pod created in current sync is not be immediately counted into the ReadyReplicas. 2Failure to create pod will cause to return error, then skip updating status. // When these values are updated, also update cmd/kubelet/app/options/options.go Pause = ImageConfig{gcRegistry, \"pause\", \"3.0\", true} // When these values are updated, also update cmd/kubelet/app/options/container_runtime.go Pause = ImageConfig{gcRegistry, \"pause\", \"3.1\", true} Porter = ImageConfig{e2eRegistry, \"porter\", \"1.0\", true} PortForwardTester = ImageConfig{e2eRegistry, \"port-forward-tester\", \"1.0\", true} Redis = ImageConfig{e2eRegistry, \"redis\", \"1.0\", true}", "positive_passages": [{"docid": "doc-en-kubernetes-a834f77ddd825b3d04a660aa7c11d0734893e4dcde6e0d4a7399e3b3daf56428", "text": " []()", "positive_passages": [{"docid": "doc-en-kubernetes-b8cb95fef1c3c524a6b804cd28cd2236820a76e473144ffd27898d08f0344824", "text": "Need to finish so the client allows you to select an init container, and verify the kubelet works correctly.\n(this is a nice intersection of new in 1.3, node and usability, if anyone wants to give it a shot. Pretty sure Clayton is busier than I am and I don't think I'll get to it)\nWill be happy to review any changes - should be fairly straightforward.\nYes, init containers are \"run to completion\" always. , harryz wrote:\nIIUC when the init containers of pod have not complete, the pod should be in status, which means will fail, if init containers run to completion, they will exit, still fail. I don't know what you mean exec should work on init containers?\nthe init container is still up and running even when the pod is in pending, we should be able to exec and debug why it's hung, for example.\nadd a straightforward exec implementation ptal.", "commid": "kubernetes_issue_25818", "tokennum": 207}], "negative_passages": []}
{"query_id": "q-en-kubernetes-af01767c5412f46657ddb2037870634c72880837b2f8b92e4eb3083de1ae0946", "query": "result.User = os.Getenv(\"USER\") } if bastion := os.Getenv(\"KUBE_SSH_BASTION\"); len(bastion) > 0 { if bastion := os.Getenv(sshBastionEnvKey); len(bastion) > 0 { stdout, stderr, code, err := runSSHCommandViaBastion(cmd, result.User, bastion, host, signer) result.Stdout = stdout result.Stderr = stderr", "positive_passages": [{"docid": "doc-en-kubernetes-375711dd290c57e850707ef6745091e3080fe654060082beb69e30fe80db0e88", "text": "ci-kubernetes-e2e-gce-scale-correctness Kubernetes e2e suite: [sig-node] SSH should SSH to all nodes and run commands kubetest.Test Jul 10 14:01:51 UTC+0200 Kubernetes e2e suite: [sig-node] SSH should SSH to all nodes and run commands: more . Test /sig node\n/cc\n/triage accepted\nUhhh, this may help here\n/milestone v1.22\n/sig scalability FYI\nI got annoyed that I couldn't click through to see the other logs, so now you can, ref: (spyglass doesn't render build- because it's 500MB, and the limit for rendering is 100MB)\n/assign SIG Scalability chairs/TLs, please find someone to address or call this a non-release-blocking failure\nAssigning to current scalability oncall (): /assign\nThis is sig-node issue - should be addres by SIG-node folks. /sig node\nis being discussed as a fix for this and I'm not sure it's correct\nis it sig scalability's intent that 5k node clusters should support SSH?\nThe SSH test in question passes in the \"default\" job: So the fact that this failure is unique to a scalability job means to me that scalability needs to own at least some part of this\nI did a little bit of digging and ci-kubernetes-e2e-gce-scale-correctness uses option in kubetest, which according to seems to be setting KUBESSHBASTION env. So I believe this is main difference between our scalability job and \"default\".\nDebugging the test in and providing a fix there. The crictl host PR () is doing something different.", "commid": "kubernetes_issue_103688", "tokennum": 400}], "negative_passages": []}
{"query_id": "q-en-kubernetes-af3b62fd3d98e7596533a817bb4d172662e9f3b64841679a414460a8bfe15ef5", "query": "}) matchExpectations := ptrMatchAllFields(gstruct.Fields{ \"Node\": gstruct.MatchAllFields(gstruct.Fields{ \"NodeName\": Equal(framework.TestContext.NodeName), \"StartTime\": recent(maxStartAge), \"SystemContainers\": gstruct.MatchElements(summaryObjectID, gstruct.IgnoreExtras, gstruct.Elements{ \"kubelet\": sysContExpectations, \"runtime\": sysContExpectations, }), \"NodeName\": Equal(framework.TestContext.NodeName), \"StartTime\": recent(maxStartAge), \"SystemContainers\": gstruct.MatchAllElements(summaryObjectID, systemContainers), \"CPU\": ptrMatchAllFields(gstruct.Fields{ \"Time\": recent(maxStatsAge), \"UsageNanoCores\": bounded(100E3, 2E9),", "positive_passages": [{"docid": "doc-en-kubernetes-237446a2101b1d0e3896762c3997d0b07e17e89bcd32bef5edcdc8a493c92d82", "text": "Kubernetes version (use ): 6 Environment: GCE What happened: I created a cluster for Heapster testing purposes to debug failing test in It turned out that Summary API does not export system/misc stats in SystemContainers section (though there are kubelet and docker stats). Everything works as expected on nodes. See also https://k8s-\nWhat node OS were you using? We don't create the system (misc) container on systemd, so it's expected that GCI nodes don't report those stats.\nThe default configuration for 1.4, which is: master: GCI nodes: Container VM\nAre the nodes running 1.4.6 too? There was a bug with this in 1.4.5, but it should have been fixed in 1.4.6.\nYes, both are running in 1.4.6. If you want to reproduce from Heapster current head run . It will take some time and as a side effect you should have a cluster with the mentioned problem. This was catch by integration tests and I temporary disabled the check here\nif you have spare cycles, can you look into this?\nfriendly ping\nLooking at it now\nI have found a couple things so far. On the containerVM nodes, cadvisor is exporting a container with \"name\": \"/system\" On the GCI Master, cadvisor is exporting a container with \"name\": \"\" The has this: \"{% set system_container = \"--system-cgroups=/system\" -%}\" (/system is the default misc container name) So the real question is: Should cadvisor export \"/system\" instead of \"\" on GCI? Or should we be configuring GCI to use \"--system-\"?\nI thought we fix this very issue for 1.4 branch by a while back. Are you running into the same issue with 1.5?\nI think I told you the wrong thing earlier. We don't support the misc (/system) cgroup on systemd systems (I can't remember why, but it was an explicit decision - or might know). The is something different, I believe a base systemd cgroup (slice). As for the misc container on ContainerVM nodes, we've been testing for the misc container in 1.5+ in the summary api test (https://k8s-) for a while, and I haven't seen this issue.", "commid": "kubernetes_issue_37453", "tokennum": 538}], "negative_passages": []}
{"query_id": "q-en-kubernetes-af3b62fd3d98e7596533a817bb4d172662e9f3b64841679a414460a8bfe15ef5", "query": "}) matchExpectations := ptrMatchAllFields(gstruct.Fields{ \"Node\": gstruct.MatchAllFields(gstruct.Fields{ \"NodeName\": Equal(framework.TestContext.NodeName), \"StartTime\": recent(maxStartAge), \"SystemContainers\": gstruct.MatchElements(summaryObjectID, gstruct.IgnoreExtras, gstruct.Elements{ \"kubelet\": sysContExpectations, \"runtime\": sysContExpectations, }), \"NodeName\": Equal(framework.TestContext.NodeName), \"StartTime\": recent(maxStartAge), \"SystemContainers\": gstruct.MatchAllElements(summaryObjectID, systemContainers), \"CPU\": ptrMatchAllFields(gstruct.Fields{ \"Time\": recent(maxStatsAge), \"UsageNanoCores\": bounded(100E3, 2E9),", "positive_passages": [{"docid": "doc-en-kubernetes-c190eb06e0eb058949f528f1af5886fe17cab35cd1b33a69ea2cddc6a3fc8812", "text": "So if this is still reproducible, it's limited to 1.4. See this the bug was created for the post-fix.\nFound the explanation here: The should use the flag to know the location that has the processes that are associated with , but it should not modify the cgroups of existing processes on the system during bootstrapping of the node. This is because is the on the host and it has not delegated authority to the to change how it manages .\nIIUC this is intended behavior on systemd nodes. Based on the explanation provided, the kubelet doesnt have the authority to control \"misc\" processes on systemd.\n+1. \"misc\" is just a bandaid for cvm. , David Ashpole <:\nOk, so I'm closing this issue.", "commid": "kubernetes_issue_37453", "tokennum": 168}], "negative_passages": []}
{"query_id": "q-en-kubernetes-af4cfab2b32b0c92f8a245912327bde3c68d070cd4d7f585f1cfde5048b071a9", "query": "\"errors\" \"testing\" \"k8s.io/api/core/v1\" v1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/types\" \"k8s.io/kubernetes/pkg/volume\" volumetest \"k8s.io/kubernetes/pkg/volume/testing\"", "positive_passages": [{"docid": "doc-en-kubernetes-e1df151812a738ca8d7e9f97519d671c643638f3ca9d9400cc430f5f7a7d204c", "text": "When user creates a static PV using folder name in path: on vSAN backed datastore then detach disk does not work because check fails - The code expects folder name to be UUID but it does not and hence it determines disk is not attached to the node and hence no detach is ever performed. /sig storage cc", "commid": "kubernetes_issue_95121", "tokennum": 68}], "negative_passages": []}
{"query_id": "q-en-kubernetes-af532b0662b3fb32cd8400ce4524f4bfbf5ac5caf8455958531f029eddce26bd", "query": "\"//staging/src/k8s.io/apimachinery/pkg/apis/meta/v1:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/util/net:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/util/runtime:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/util/sets:go_default_library\", \"//staging/src/k8s.io/apimachinery/pkg/util/wait:go_default_library\", \"//staging/src/k8s.io/client-go/kubernetes/typed/core/v1:go_default_library\", \"//staging/src/k8s.io/client-go/tools/record:go_default_library\",", "positive_passages": [{"docid": "doc-en-kubernetes-8c9ebf4e41021efd0692132f5b2f5e7ec08451127a302fc405c42cb972b9ed2e", "text": "
If you are using a released version of Kubernetes, you should refer to the docs that go with that version. Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). -- # Generic Configuration Object ## Abstract The `ConfigMap` API resource stores data used for the configuration of applications deployed on Kubernetes. The main focus of this resource is to: * Provide dynamic distribution of configuration data to deployed applications. * Encapsulate configuration information and simplify `Kubernetes` deployments. * Create a flexible configuration model for `Kubernetes`. ## Motivation A `Secret`-like API resource is needed to store configuration data that pods can consume. Goals of this design: 1. Describe a `ConfigMap` API resource 2. Describe the semantics of consuming `ConfigMap` as environment variables 3. Describe the semantics of consuming `ConfigMap` as files in a volume ## Use Cases 1. As a user, I want to be able to consume configuration data as environment variables 2. As a user, I want to be able to consume configuration data as files in a volume 3. As a user, I want my view of configuration data in files to be eventually consistent with changes to the data ### Consuming `ConfigMap` as Environment Variables Many programs read their configuration from environment variables. `ConfigMap` should be possible to consume in environment variables. The rough series of events for consuming `ConfigMap` this way is: 1. A `ConfigMap` object is created 2. A pod that consumes the configuration data via environment variables is created 3. The pod is scheduled onto a node 4. The kubelet retrieves the `ConfigMap` resource(s) referenced by the pod and starts the container processes with the appropriate data in environment variables ### Consuming `ConfigMap` in Volumes Many programs read their configuration from configuration files. `ConfigMap` should be possible to consume in a volume. The rough series of events for consuming `ConfigMap` this way is: 1. A `ConfigMap` object is created 2. A new pod using the `ConfigMap` via the volume plugin is created 3. The pod is scheduled onto a node 4. The Kubelet creates an instance of the volume plugin and calls its `Setup()` method 5. The volume plugin retrieves the `ConfigMap` resource(s) referenced by the pod and projects the appropriate data into the volume ### Consuming `ConfigMap` Updates Any long-running system has configuration that is mutated over time. Changes made to configuration data must be made visible to pods consuming data in volumes so that they can respond to those changes. The `resourceVersion` of the `ConfigMap` object will be updated by the API server every time the object is modified. After an update, modifications will be made visible to the consumer container: 1. A `ConfigMap` object is created 2. A new pod using the `ConfigMap` via the volume plugin is created 3. The pod is scheduled onto a node 4. During the sync loop, the Kubelet creates an instance of the volume plugin and calls its `Setup()` method 5. The volume plugin retrieves the `ConfigMap` resource(s) referenced by the pod and projects the appropriate data into the volume 6. The `ConfigMap` referenced by the pod is updated 7. During the next iteration of the `syncLoop`, the Kubelet creates an instance of the volume plugin and calls its `Setup()` method 8. The volume plugin projects the updated data into the volume atomically It is the consuming pod's responsibility to make use of the updated data once it is made visible. 
Because environment variables cannot be updated without restarting a container, configuration data consumed in environment variables will not be updated. ### Advantages * Easy to consume in pods; consumer-agnostic * Configuration data is persistent and versioned * Consumers of configuration data in volumes can respond to changes in the data ## Proposed Design ### API Resource The `ConfigMap` resource will be added to the main API: ```go package api // ConfigMap holds configuration data for pods to consume. type ConfigMap struct { TypeMeta `json:\",inline\"` ObjectMeta `json:\"metadata,omitempty\"` // Data contains the configuration data. Each key must be a valid DNS_SUBDOMAIN or leading // dot followed by valid DNS_SUBDOMAIN. Data map[string]string `json:\"data,omitempty\"` } type ConfigMapList struct { TypeMeta `json:\",inline\"` ListMeta `json:\"metadata,omitempty\"` Items []ConfigMap `json:\"items\"` } ``` A `Registry` implementation for `ConfigMap` will be added to `pkg/registry/configmap`. ### Environment Variables The `EnvVarSource` will be extended with a new selector for `ConfigMap`: ```go package api // EnvVarSource represents a source for the value of an EnvVar. type EnvVarSource struct { // other fields omitted // Specifies a ConfigMap key ConfigMap *ConfigMapSelector `json:\"configMap,omitempty\"` } // ConfigMapSelector selects a key of a ConfigMap. type ConfigMapSelector struct { // The name of the ConfigMap to select a key from. ConfigMapName string `json:\"configMapName\"` // The key of the ConfigMap to select. Key string `json:\"key\"` } ``` ### Volume Source A new `ConfigMapVolumeSource` type of volume source containing the `ConfigMap` object will be added to the `VolumeSource` struct in the API: ```go package api type VolumeSource struct { // other fields omitted ConfigMap *ConfigMapVolumeSource `json:\"configMap,omitempty\"` } // Represents a volume that holds configuration data. type ConfigMapVolumeSource struct { LocalObjectReference `json:\",inline\"` // A list of keys to project into the volume. // If unspecified, each key-value pair in the Data field of the // referenced ConfigMap will be projected into the volume as a file whose name // is the key and content is the value. // If specified, the listed keys will be project into the specified paths, and // unlisted keys will not be present. Items []KeyToPath `json:\"items,omitempty\"` } // Represents a mapping of a key to a relative path. type KeyToPath struct { // The name of the key to select Key string `json:\"key\"` // The relative path name of the file to be created. // Must not be absolute or contain the '..' path. Must be utf-8 encoded. // The first item of the relative path must not start with '..' Path string `json:\"path\"` } ``` **Note:** The update logic used in the downward API volume plug-in will be extracted and re-used in the volume plug-in for `ConfigMap`. ### Changes to Secret We will update the Secret volume plugin to have a similar API to the new ConfigMap volume plugin. The secret volume plugin will also begin updating secret content in the volume when secrets change. 
## Examples #### Consuming `ConfigMap` as Environment Variables ```yaml apiVersion: v1 kind: ConfigMap metadata: name: etcd-env-config data: number-of-members: 1 initial-cluster-state: new initial-cluster-token: DUMMY_ETCD_INITIAL_CLUSTER_TOKEN discovery-token: DUMMY_ETCD_DISCOVERY_TOKEN discovery-url: http://etcd-discovery:2379 etcdctl-peers: http://etcd:2379 ``` This pod consumes the `ConfigMap` as environment variables: ```yaml apiVersion: v1 kind: Pod metadata: name: config-env-example spec: containers: - name: etcd image: openshift/etcd-20-centos7 ports: - containerPort: 2379 protocol: TCP - containerPort: 2380 protocol: TCP env: - name: ETCD_NUM_MEMBERS valueFrom: configMap: configMapName: etcd-env-config key: number-of-members - name: ETCD_INITIAL_CLUSTER_STATE valueFrom: configMap: configMapName: etcd-env-config key: initial-cluster-state - name: ETCD_DISCOVERY_TOKEN valueFrom: configMap: configMapName: etcd-env-config key: discovery-token - name: ETCD_DISCOVERY_URL valueFrom: configMap: configMapName: etcd-env-config key: discovery-url - name: ETCDCTL_PEERS valueFrom: configMap: configMapName: etcd-env-config key: etcdctl-peers ``` #### Consuming `ConfigMap` as Volumes `redis-volume-config` is intended to be used as a volume containing a config file: ```yaml apiVersion: extensions/v1beta1 kind: ConfigMap metadata: name: redis-volume-config data: redis.conf: \"pidfile /var/run/redis.pid\nport 6379\ntcp-backlog 511\ndatabases 1\ntimeout 0\n\" ``` The following pod consumes the `redis-volume-config` in a volume: ```yaml apiVersion: v1 kind: Pod metadata: name: config-volume-example spec: containers: - name: redis image: kubernetes/redis command: \"redis-server /mnt/config-map/etc/redis.conf\" ports: - containerPort: 6379 volumeMounts: - name: config-map-volume mountPath: /mnt/config-map volumes: - name: config-map-volume configMap: name: redis-volume-config items: - path: \"etc/redis.conf\" key: redis.conf ``` ## Future Improvements In the future, we may add the ability to specify an init-container that can watch the volume contents for updates and respond to changes when they occur. []() ", "positive_passages": [{"docid": "doc-en-kubernetes-ab39f39ee12e84e818aa9c92c6877733f255a4fad832176aa0525d5bb0051663", "text": "What happened: Visited It shows the release notes for v1.17 What you expected to happen: See the release notes for v1.18 How to reproduce it (as minimally and precisely as possible): Open in any browser or with curl\n/sig docs\n/assign", "commid": "kubernetes_issue_89598", "tokennum": 57}], "negative_passages": []}
{"query_id": "q-en-kubernetes-b3e0989c22fd7b6ccabce082f989f2c02ef4ecfa2523df8b13aecd09bbe30fd2", "query": "}) }) Describe(\"[Skipped][Example]Liveness\", func() { It(\"liveness pods should be automatically restarted\", func() { mkpath := func(file string) string { return filepath.Join(testContext.RepoRoot, \"docs\", \"user-guide\", \"liveness\", file) } execYaml := mkpath(\"exec-liveness.yaml\") httpYaml := mkpath(\"http-liveness.yaml\") nsFlag := fmt.Sprintf(\"--namespace=%v\", ns) runKubectl(\"create\", \"-f\", execYaml, nsFlag) runKubectl(\"create\", \"-f\", httpYaml, nsFlag) checkRestart := func(podName string, timeout time.Duration) { err := waitForPodRunningInNamespace(c, podName, ns) Expect(err).NotTo(HaveOccurred()) for t := time.Now(); time.Since(t) < timeout; time.Sleep(poll) { pod, err := c.Pods(ns).Get(podName) expectNoError(err, fmt.Sprintf(\"getting pod %s\", podName)) restartCount := api.GetExistingContainerStatus(pod.Status.ContainerStatuses, \"liveness\").RestartCount Logf(\"Pod: %s restart count:%d\", podName, restartCount) if restartCount > 0 { return } } Failf(\"Pod %s was not restarted\", podName) } By(\"Check restarts\") checkRestart(\"liveness-exec\", time.Minute) checkRestart(\"liveness-http\", time.Minute) }) }) }) func makeHttpRequestToService(c *client.Client, ns, service, path string) (string, error) {", "positive_passages": [{"docid": "doc-en-kubernetes-efbe4c5d01aeff5665bc84249f5b94afd1b0f5aa50522bcb417b43505717780e", "text": "! ! Which jobs are failing: Which test(s) are failing: Since when has it been failing: failures started between 3-4PM (EST), on 1/11/2019 the failures appear limited to containerd jobs (here's a triage search for note that only containerd jobs appear as failing): /sig node\nthe relevant commit range on master appears to be ... the relevent commit range in test-infra appears to be\ncc\nfor test-infra: aws-janitor: clean up ELBs - definitely unrelated, only affects some of the AWS testing cleanup Update kubekins tag for eks jobs - definitely unrelated, affects the docker image in which some EKS testing runs only Update ubuntu image used for e2e node test - I think unrelated, appears to only affect the node e2e tests rest are merge commits\nthanks for checking. it's also possible the commit ranges I linked to need to be expanded slightly... that was my first attempt at correlating merge timestamps with the triage board failures... it's possible I got timezone correlation wrong a few hours in one direction or the other.\nah, they probably do then. otherwise I'd say something else must have changed out of band (perhaps there's a pointer to a containerd version to use somewhere or something).\nflakes or was something fixed offline (gce / gci / gke)? https://k8s- https://k8s- these turned (mostly) green without k/k and k/test-infra changes.\nHuh. Disconcerting, but ok. /close\nClosing this issue. func TestClearUDPConntrackForPortNAT(t *testing.T) { fcmd := fakeexec.FakeCmd{ CombinedOutputScript: []fakeexec.FakeCombinedOutputAction{ func() ([]byte, error) { return []byte(\"1 flow entries have been deleted\"), nil }, func() ([]byte, error) { return []byte(\"\"), fmt.Errorf(\"conntrack v1.4.2 (conntrack-tools): 0 flow entries have been deleted\") }, func() ([]byte, error) { return []byte(\"1 flow entries have been deleted\"), nil }, }, } fexec := fakeexec.FakeExec{ CommandScript: []fakeexec.FakeCommandAction{ func(cmd string, args ...string) exec.Cmd { return fakeexec.InitFakeCmd(&fcmd, cmd, args...) 
}, func(cmd string, args ...string) exec.Cmd { return fakeexec.InitFakeCmd(&fcmd, cmd, args...) }, func(cmd string, args ...string) exec.Cmd { return fakeexec.InitFakeCmd(&fcmd, cmd, args...) }, }, LookPathFunc: func(cmd string) (string, error) { return cmd, nil }, } testCases := []struct { name string port int dest string }{ { name: \"IPv4 success\", port: 30211, dest: \"1.2.3.4\", }, } svcCount := 0 for i, tc := range testCases { err := ClearEntriesForPortNAT(&fexec, tc.dest, tc.port, v1.ProtocolUDP) if err != nil { t.Errorf(\"%s test case: unexpected error: %v\", tc.name, err) } expectCommand := fmt.Sprintf(\"conntrack -D -p udp --dport %d --dst-nat %s\", tc.port, tc.dest) + familyParamStr(utilnet.IsIPv6String(tc.dest)) execCommand := strings.Join(fcmd.CombinedOutputLog[i], \" \") if expectCommand != execCommand { t.Errorf(\"%s test case: Expect command: %s, but executed %s\", tc.name, expectCommand, execCommand) } svcCount++ } if svcCount != fexec.CommandCalls { t.Errorf(\"Expect command executed %d times, but got %d\", svcCount, fexec.CommandCalls) } } ", "positive_passages": [{"docid": "doc-en-kubernetes-9e39524d0e4c89f1ddf9c516fe4e1c471a25826ee6b222abfe1f6c0cc213064c", "text": " %v:%v (config.clusterIP)\", config.TestContainerPod.Name, config.ClusterIP, e2enetwork.ClusterUDPPort)) err = config.DialFromTestContainer(\"udp\", config.ClusterIP, e2enetwork.ClusterUDPPort, config.MaxTries, 0, config.EndpointHostnames()) if err != nil { framework.Failf(\"failed dialing endpoint, %v\", err) } ginkgo.By(fmt.Sprintf(\"dialing(udp) %v --> %v:%v (nodeIP)\", config.TestContainerPod.Name, config.NodeIP, config.NodeUDPPort)) err = config.DialFromTestContainer(\"udp\", config.NodeIP, config.NodeUDPPort, config.MaxTries, 0, config.EndpointHostnames()) if err != nil { framework.Failf(\"failed dialing endpoint, %v\", err) } ginkgo.By(\"node-Service(hostNetwork): http\") ginkgo.By(fmt.Sprintf(\"dialing(http) %v (node) --> %v:%v (config.clusterIP)\", config.NodeIP, config.ClusterIP, e2enetwork.ClusterHTTPPort))", "positive_passages": [{"docid": "doc-en-kubernetes-15fd873e113c2b87c10057fb05584532c0b0add34bb5504a54b7699060ebd74c", "text": " []() No newline at end of file", "positive_passages": [{"docid": "doc-en-kubernetes-0f2f59a4098799df231d0f4ddd6804f8b020392d4e4aa17c98fa7a6b4bb4eccd", "text": "The heapster needs to be installed if end user want to use autoscaler feature, we should mention this in document.", "commid": "kubernetes_issue_17466", "tokennum": 25}], "negative_passages": []}
{"query_id": "q-en-kubernetes-fe50a90f73b1cb6aefb125036495938af3a51ac1cc7fea157d4be14411b2fa04", "query": "return nil } func (ctrl *controller) getParameters(ctx context.Context, claim *resourcev1alpha2.ResourceClaim, class *resourcev1alpha2.ResourceClass) (claimParameters, classParameters interface{}, err error) { func (ctrl *controller) getParameters(ctx context.Context, claim *resourcev1alpha2.ResourceClaim, class *resourcev1alpha2.ResourceClass, notifyClaim bool) (claimParameters, classParameters interface{}, err error) { classParameters, err = ctrl.driver.GetClassParameters(ctx, class) if err != nil { ctrl.eventRecorder.Event(class, v1.EventTypeWarning, \"Failed\", err.Error()) err = fmt.Errorf(\"class parameters %s: %v\", class.ParametersRef, err) return } claimParameters, err = ctrl.driver.GetClaimParameters(ctx, claim, class, classParameters) if err != nil { if notifyClaim { ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, \"Failed\", err.Error()) } err = fmt.Errorf(\"claim parameters %s: %v\", claim.Spec.ParametersRef, err) return }", "positive_passages": [{"docid": "doc-en-kubernetes-e8eec0a3f7a81297ed919db6952e73e9801661fbfe386a524b19b8189beefe88", "text": "When ResourceClaimParameters or ResourceClassParameters fail validation in resource driver during PodSchedulingContext sync loop, the error is visible only in the events of the PodSchedulingContext. The error should be visible on the object that fails validation as well, ResourceClaim or ResourceClass respectively. Creating ResourceClaim with unsupported parameters' values or unsupported No response # The Kubernetes API Primary system and API concepts are documented in the [User guide](user-guide.md). Overall API conventions are described in the [API conventions doc](api-conventions.md). Complete API details are documented via [Swagger](http://swagger.io/). The Kubernetes apiserver (aka \"master\") exports an API that can be used to retrieve the [Swagger spec](https://github.com/swagger-api/swagger-spec/tree/master/schemas/v1.2) for the Kubernetes API, by default at `/swaggerapi`, and a UI you can use to browse the API documentation at `/swaggerui`. We also periodically update a [statically generated UI](http://kubernetes.io/third_party/swagger-ui/). Remote access to the API is discussed in the [access doc](accessing_the_api.md). The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. The [Kubectl](kubectl.md) command-line tool can be used to create, update, delete, and get API objects. Kubernetes also stores its serialized state (currently in [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)) in terms of the API resources. Kubernetes itself is decomposed into multiple components, which interact through its API. ## API changes In our experience, any system that is successful needs to grow and change as new use cases emerge or existing ones change. Therefore, we expect the Kubernetes API to continuously change and grow. However, we intend to not break compatibility with existing clients, for an extended period of time. In general, new API resources and new resource fields can be expected to be added frequently. Elimination of resources or fields will require following a deprecation process. The precise deprecation policy for eliminating features is TBD, but once we reach our 1.0 milestone, there will be a specific policy. 
What constitutes a compatible change and how to change the API are detailed by the [API change document](devel/api_changes.md). ## API versioning Fine-grain resource evolution alone makes it difficult to eliminate fields or restructure resource representations. Therefore, Kubernetes supports multiple API versions, each at a different API path prefix, such as `/api/v1beta3`. These are simply different interfaces to read and/or modify the same underlying resources. In general, all API resources are accessible via all API versions, though there may be some cases in the future where that is not true. Distinct API versions present more clear, consistent views of system resources and behavior than intermingled, independently evolved resources. They also provide a more straightforward mechanism for controlling access to end-of-lifed and/or experimental APIs. The [API and release versioning proposal](versioning.md) describes the current thinking on the API version evolution process. ## v1beta1 and v1beta2 are deprecated; please move to v1beta3 ASAP As of April 1, 2015, the Kubernetes v1beta3 API has been enabled by default, and the v1beta1 and v1beta2 APIs are deprecated. v1beta3 should be considered the v1 release-candidate API, and the v1 API is expected to be substantially similar. As \"pre-release\" APIs, v1beta1, v1beta2, and v1beta3 will be eliminated once the v1 API is available, by the end of June 2015. ## v1beta3 conversion tips We're working to convert all documentation and examples to v1beta3. Most examples already contain a v1beta3 subdirectory with the API objects translated to v1beta3. A simple [API conversion tool](cluster_management.md#switching-your-config-files-to-a-new-api-version) has been written to simplify the translation process. Use `kubectl create --validate` in order to validate your json or yaml against our Swagger spec. Some important differences between v1beta1/2 and v1beta3: * The resource `id` is now called `name`. * `name`, `labels`, `annotations`, and other metadata are now nested in a map called `metadata` * `desiredState` is now called `spec`, and `currentState` is now called `status` * `/minions` has been moved to `/nodes`, and the resource has kind `Node` * The namespace is required (for all namespaced resources) and has moved from a URL parameter to the path: `/api/v1beta3/namespaces/{namespace}/{resource_collection}/{resource_name}` * The names of all resource collections are now lower cased - instead of `replicationControllers`, use `replicationcontrollers`. * To watch for changes to a resource, open an HTTP or Websocket connection to the collection URL and provide the `?watch=true` URL parameter along with the desired `resourceVersion` parameter to watch from. * The container `entrypoint` has been renamed to `command`, and `command` has been renamed to `args`. * Container, volume, and node resources are expressed as nested maps (e.g., `resources{cpu:1}`) rather than as individual fields, and resource values support [scaling suffixes](resources.md#resource-quantities) rather than fixed scales (e.g., milli-cores). * Restart policy is represented simply as a string (e.g., \"Always\") rather than as a nested map (\"always{}\"). * The volume `source` is inlined into `volume` rather than nested. ", "positive_passages": [{"docid": "doc-en-kubernetes-c802a37aad85dfdce2d6232f30f1e039273d58e19f8aa63b8acb725eeb2b6040", "text": "it's impossible to GET replication controllers on v3. on v1 on the same server it works fine. 
example request: GET on http://localhost:8080/api/v1beta3/replicationControllers returns 404 http://localhost:8080/api/v1beta1/replicationControllers returns 200\nv1beta3 moved to all lower case URL Would give that a try first Sent from my iPhone\nlooks like it's working. but it breaks backwards for clients who worked with v1. For instance, in my gem I can't now smoothly work with both versions because I need to treat the name of the resource differently. why don't at least redirect those who call replicationControllers to the lowercase too? Also, there's no way to know which resources are available and their names. A lot of REST api on their most top level at least publish the links to main resources, but if I call: http://localhost:8080/api/v1beta3/ it returns 404\nBecause we are actively trying to drop support for mixed case prior to 1.0, I did not continue supporting the old url paths. If you look at the go client, we have a \"legacy\" flag we use to control this and a few other behaviors.\nwell yes, but breaking backwards creates more work for the users... more \"if this version ...\" kind of stuff.\nWe got v1bet1/2 wrong and don't want to carry it forever. Stopping it early before v1 is our priority.\nok :( are the REST apis going through any kind of design review or code review?\nv1beta1 is really an alpha quality API. It grew organically and did not go through any kind of design review, as evidenced by the many issues you've filed. We want to get rid of it ASAP. We intend v1beta3 to be the \"release candidate\" for the v1 API, which will be the stable API. As for knowing what exists, you can now browse or GET . Additionally, pull is in progress, and will return a list of valid paths to GET .\nthanks. will check swagger for now", "commid": "kubernetes_issue_3670", "tokennum": 478}], "negative_passages": []}
{"query_id": "q-en-kubernetes-fe567d9cf78409720bcbb87b1ad6339cb18b53dd88041da1d575419b6cc2df05", "query": " # The Kubernetes API Primary system and API concepts are documented in the [User guide](user-guide.md). Overall API conventions are described in the [API conventions doc](api-conventions.md). Complete API details are documented via [Swagger](http://swagger.io/). The Kubernetes apiserver (aka \"master\") exports an API that can be used to retrieve the [Swagger spec](https://github.com/swagger-api/swagger-spec/tree/master/schemas/v1.2) for the Kubernetes API, by default at `/swaggerapi`, and a UI you can use to browse the API documentation at `/swaggerui`. We also periodically update a [statically generated UI](http://kubernetes.io/third_party/swagger-ui/). Remote access to the API is discussed in the [access doc](accessing_the_api.md). The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. The [Kubectl](kubectl.md) command-line tool can be used to create, update, delete, and get API objects. Kubernetes also stores its serialized state (currently in [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)) in terms of the API resources. Kubernetes itself is decomposed into multiple components, which interact through its API. ## API changes In our experience, any system that is successful needs to grow and change as new use cases emerge or existing ones change. Therefore, we expect the Kubernetes API to continuously change and grow. However, we intend to not break compatibility with existing clients, for an extended period of time. In general, new API resources and new resource fields can be expected to be added frequently. Elimination of resources or fields will require following a deprecation process. The precise deprecation policy for eliminating features is TBD, but once we reach our 1.0 milestone, there will be a specific policy. What constitutes a compatible change and how to change the API are detailed by the [API change document](devel/api_changes.md). ## API versioning Fine-grain resource evolution alone makes it difficult to eliminate fields or restructure resource representations. Therefore, Kubernetes supports multiple API versions, each at a different API path prefix, such as `/api/v1beta3`. These are simply different interfaces to read and/or modify the same underlying resources. In general, all API resources are accessible via all API versions, though there may be some cases in the future where that is not true. Distinct API versions present more clear, consistent views of system resources and behavior than intermingled, independently evolved resources. They also provide a more straightforward mechanism for controlling access to end-of-lifed and/or experimental APIs. The [API and release versioning proposal](versioning.md) describes the current thinking on the API version evolution process. ## v1beta1 and v1beta2 are deprecated; please move to v1beta3 ASAP As of April 1, 2015, the Kubernetes v1beta3 API has been enabled by default, and the v1beta1 and v1beta2 APIs are deprecated. v1beta3 should be considered the v1 release-candidate API, and the v1 API is expected to be substantially similar. As \"pre-release\" APIs, v1beta1, v1beta2, and v1beta3 will be eliminated once the v1 API is available, by the end of June 2015. ## v1beta3 conversion tips We're working to convert all documentation and examples to v1beta3. 
Most examples already contain a v1beta3 subdirectory with the API objects translated to v1beta3. A simple [API conversion tool](cluster_management.md#switching-your-config-files-to-a-new-api-version) has been written to simplify the translation process. Use `kubectl create --validate` in order to validate your json or yaml against our Swagger spec. Some important differences between v1beta1/2 and v1beta3: * The resource `id` is now called `name`. * `name`, `labels`, `annotations`, and other metadata are now nested in a map called `metadata` * `desiredState` is now called `spec`, and `currentState` is now called `status` * `/minions` has been moved to `/nodes`, and the resource has kind `Node` * The namespace is required (for all namespaced resources) and has moved from a URL parameter to the path: `/api/v1beta3/namespaces/{namespace}/{resource_collection}/{resource_name}` * The names of all resource collections are now lower cased - instead of `replicationControllers`, use `replicationcontrollers`. * To watch for changes to a resource, open an HTTP or Websocket connection to the collection URL and provide the `?watch=true` URL parameter along with the desired `resourceVersion` parameter to watch from. * The container `entrypoint` has been renamed to `command`, and `command` has been renamed to `args`. * Container, volume, and node resources are expressed as nested maps (e.g., `resources{cpu:1}`) rather than as individual fields, and resource values support [scaling suffixes](resources.md#resource-quantities) rather than fixed scales (e.g., milli-cores). * Restart policy is represented simply as a string (e.g., \"Always\") rather than as a nested map (\"always{}\"). * The volume `source` is inlined into `volume` rather than nested. ", "positive_passages": [{"docid": "doc-en-kubernetes-22f9249afaa8c5861790bc63eab4a28e4fa1ff4ac55253dc70acafd4d0034e9c", "text": "Already we're getting bugs from people who are missing some of the more subtle conversions.\nPoint them at the conversion tool: cmd/kube-version-\nPriority? Team? Milestone?\nDone:", "commid": "kubernetes_issue_6272", "tokennum": 41}], "negative_passages": []}
{"query_id": "q-en-kubernetes-fe615ecb605ea9cd4d39dd3f2bc4e9030d18bdc2bf65717f3edfb2f949cf2290", "query": "}, } resolvConf, cleanup := getResolvConf(t) defer cleanup() resolvConfContent := []byte(fmt.Sprintf(\"nameserver %snsearch %sn\", testHostNameserver, testHostDomain)) tmpfile, err := os.CreateTemp(\"\", \"tmpResolvConf\") if err != nil { t.Fatal(err) } defer os.Remove(tmpfile.Name()) if _, err := tmpfile.Write(resolvConfContent); err != nil { t.Fatal(err) } if err := tmpfile.Close(); err != nil { t.Fatal(err) } configurer := NewConfigurer(recorder, nodeRef, nil, []net.IP{netutils.ParseIPSloppy(testClusterNameserver)}, testClusterDNSDomain, resolvConf) configurer.getHostDNSConfig = fakeGetHostDNSConfigCustom configurer := NewConfigurer(recorder, nodeRef, nil, []net.IP{netutils.ParseIPSloppy(testClusterNameserver)}, testClusterDNSDomain, tmpfile.Name()) testCases := []struct { desc string", "positive_passages": [{"docid": "doc-en-kubernetes-5e93ba1a338583c6193506912a68a67fcc908451a37c483694071bc120df2bcb", "text": "When a --resolv-conf (DNS resolver config) is passed to kubelet on a Windows node, the config content is used as part of sandbox initialization (through getHostDNSConfig()). Now with the recent changes (), it seems now to fail with an error like: E0315 10:22:16. 4764 ] \"Error syncing pod, skipping\" err=\"failed to \"CreatePodSandbox\" for \"img-pull-(-b40a-459d-9033-)\" with CreatePodSandboxError: \"Failed to generate sandbox config for pod \"img-pull-(-b40a-459d-9033-)\": Unexpected resolver config value: \"C:etckubernetescni\". Expected \"\" or \"Host\".\"\" pod=\"img-puller-3842/img-pull-\" podUID=-b40a-459d-9033- This is showing on all the runs of GCE testgrid (latest): Reading/using the passed resolv config on Windows Run a Windows pool with DNS resolv config being set as part of kubelet params. No response_ 27 alpha.3 1.27 beta.0 master resolvConf, cleanup := getResolvConf(t) defer cleanup() resolvConfContent := []byte(fmt.Sprintf(\"nameserver %snsearch %sn\", testHostNameserver, testHostDomain)) tmpfile, err := os.CreateTemp(\"\", \"tmpResolvConf\") if err != nil { t.Fatal(err) } defer os.Remove(tmpfile.Name()) if _, err := tmpfile.Write(resolvConfContent); err != nil { t.Fatal(err) } if err := tmpfile.Close(); err != nil { t.Fatal(err) } configurer := NewConfigurer(recorder, nodeRef, nil, []net.IP{netutils.ParseIPSloppy(testClusterNameserver)}, testClusterDNSDomain, resolvConf) configurer.getHostDNSConfig = fakeGetHostDNSConfigCustom configurer := NewConfigurer(recorder, nodeRef, nil, []net.IP{netutils.ParseIPSloppy(testClusterNameserver)}, testClusterDNSDomain, tmpfile.Name()) testCases := []struct { desc string", "positive_passages": [{"docid": "doc-en-kubernetes-a55794e1d2c420cf7484c297920657922704eb45d045ae8237364febdd60d634", "text": "Yeah, what used on GCE/GKE, seems to be a Linux-like conf, eg: nameserver 10.128.0.1 search c. { name: \"NodeAddresses should report error if VMSS instanceID is invalid\", nodeName: \"vm123456\", metadataName: \"vmss_$123\", providerID: \"azure:///subscriptions/subscription/resourceGroups/rg/providers/Microsoft.Compute/virtualMachines/vm1\", vmType: vmTypeVMSS, expectedErrMsg: fmt.Errorf(\"failed to parse VMSS instanceID %q: strconv.ParseInt: parsing %q: invalid syntax\", \"$123\", \"$123\"), }, } for _, test := range testcases {", "positive_passages": [{"docid": "doc-en-kubernetes-93c0a615d6725c12b66abb2dead01e17ca700d451119b1467d3edce68f73a013", "text": "