{"query_id": "q-en-kubernetes-8e3c1b5434af24681841213f17bfec0a8ba7d713665fc9b8d51a3bd3e2bb8fcb", "query": "A temporary workaround is to seed our random number generators. But really, apiserver should assign a guaranteed-unique identifier upon resource creation.\nInternally we use RFC4122 UUIDs for identifying pods. Any objections to making this part of the pod setup? I guess it would really be a string (like \"id\") but with the strong suggestion that it be an encoded UUID. Or we could use docker-style 256 bit randoms, but that might get confusing. If we further lock down container names to RFC1035 labels, we can use # Identifiers and Names in Kubernetes A summarization of the goals and recommendations for identifiers and names in Kubernetes. Described in [GitHub issue #199](https://github.com/GoogleCloudPlatform/kubernetes/issues/199). ## Definitions identifier : an opaque machine generated value guaranteed to be unique in a certain space name : a human readable string intended to help an end user distinguish between similar but distinct entities [rfc1035](http://www.ietf.org/rfc/rfc1035.txt)/[rfc1123](http://www.ietf.org/rfc/rfc1123.txt) label (DNS_LABEL) : An alphanumeric (a-z, A-Z, and 0-9) string less than 64 characters, with the '-' character allowed anywhere except the first or last character, suitable for use as a hostname or segment in a domain name. [rfc1035](http://www.ietf.org/rfc/rfc1035.txt)/[rfc1123](http://www.ietf.org/rfc/rfc1123.txt) subdomain (DNS_SUBDOMAIN) : One or more rfc1035/rfc1123 labels separated by '.' with a maximum length of 255 characters namespace string (NAMESPACE) : An rfc1035/rfc1123 subdomain no longer than 191 characters (255-63-1) source namespace string : The namespace string of a source of pod definitions on a host [rfc4122](http://www.ietf.org/rfc/rfc4122.txt) universally unique identifier (UUID) : A 128 bit generated value that is extremely unilkely to collide across time and space and requires no central coordination pod unique name : the combination of a pod's source namespace string and name string on a host pod unique identifier : the identifier associated with a single execution of a pod on a host, which changes on each restart. 
Must be a UUID ## Objectives for names and identifiers 1) Uniquely identify an instance of a pod on the apiserver and on the kubelet 2) Uniquely identify an instance of a container within a pod on the apiserver and on the kubelet 3) Uniquely identify a single execution of a container in time for logging or reporting 4) The structure of a pod specification should stay largely the same throughout the entire system 5) Provide human-friendly, memorable, semantically meaningful, short-ish references in container and pod operations 6) Provide predictable container and pod references in operations and/or configuration files 7) Allow idempotent creation of API resources (#148) 8) Allow DNS names to be automatically generated for individual containers or pods (#146) ## Implications 1) Each container name within a container manifest must be unique within that manifest 2) Each pod instance on the apiserver must have a unique identifier across space and time (UUID) 1) The apiserver may set this identifier if not specified by a client 2) This identifier will persist even if moved across hosts 3) Each pod instance on the apiserver must have a name string which is human-friendly, dns-friendly (DNS_LABEL), and unique in the apiserver space 1) The apiserver may set this name string if not specified by a client 4) Each apiserver must have a configured namespace string (NAMESPACE) that is unique across all apiservers that share its configured minions 5) Each source of pod configuration to a kubelet must have a source namespace string (NAMESPACE) that is unique across all sources available to that kubelet 6) All pod instances on a host must have a name string which is human-friendly, dns-friendly, and unique per namespace string (DNS_LABEL) 7) The combination of the name string and source namespace string on a kubelet must be unique and is referred to as the pod unique name 8) When starting an instance of a pod on a kubelet the first time, a new pod unique identifier (UUID) should be assigned to that pod instance 1) If that pod is restarted, it must retain the pod unique identifier it previously had 2) If the pod is stopped and a new instance with the same pod unique name is started, it must be assigned a new pod unique identifier 9) The kubelet should use the pod unique name and pod unique identifier to produce a Docker container name (--name) ", "commid": "kubernetes_pr_334"}], "negative_passages": []} {"query_id": "q-en-kubernetes-fa1ebef63f76d15ef1c28364f7447772418ff92e79eefd59fa95d78576bc8628", "query": "If you are using a private docker registry, you first need to call something like and provide a user/pass/email. Any further calls to will pick up your identity from a file in $HOME/.dockercfg. Before a kubelet starts a container, it calls the equivalent of . Unfortunately, the auth information is not picked up from $HOME/.dockercfg, and there is no alternate way to configure it. 
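The complaint in this record is that the kubelet never consults $HOME/.dockercfg before pulling. A hedged sketch of reading that file follows; the field names match the classic .dockercfg layout (a JSON map from registry URL to a base64-encoded "user:password" pair plus an email), but none of this is the kubelet's actual credentialprovider code:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// dockerConfigEntry mirrors one entry of the legacy ~/.dockercfg format.
type dockerConfigEntry struct {
	Auth  string `json:"auth"`  // base64("user:password")
	Email string `json:"email"`
}

// readDockerCfg loads ~/.dockercfg and decodes each registry's credentials.
func readDockerCfg(home string) (map[string][2]string, error) {
	raw, err := os.ReadFile(filepath.Join(home, ".dockercfg"))
	if err != nil {
		return nil, err
	}
	var cfg map[string]dockerConfigEntry
	if err := json.Unmarshal(raw, &cfg); err != nil {
		return nil, err
	}
	creds := map[string][2]string{}
	for registry, entry := range cfg {
		decoded, err := base64.StdEncoding.DecodeString(entry.Auth)
		if err != nil {
			return nil, fmt.Errorf("bad auth for %s: %v", registry, err)
		}
		parts := strings.SplitN(string(decoded), ":", 2)
		if len(parts) != 2 {
			return nil, fmt.Errorf("auth for %s is not user:password", registry)
		}
		creds[registry] = [2]string{parts[0], parts[1]}
	}
	return creds, nil
}

func main() {
	home, _ := os.UserHomeDir()
	creds, err := readDockerCfg(home)
	if err != nil {
		fmt.Println("no usable .dockercfg:", err)
		return
	}
	for registry, userPass := range creds {
		fmt.Printf("%s -> user %q\n", registry, userPass[0])
	}
}
```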
The kubelet needs to support docker registry auth.\nHere's where the auth should be getting passed to the docker client:\nI'm not certain, but it looks like authenticated docker pulls would have worked before this patch:\nSee .\nah, search skills failed me again", "positive_passages": [{"docid": "doc-en-kubernetes-d0b2a1694e137a25f0c9d773183746b4cbbfdc14bd5ba535205288f1c6c3991e", "text": " # Maintainers Eric Paris ", "commid": "kubernetes_pr_1647"}], "negative_passages": []} {"query_id": "q-en-kubernetes-fa1ebef63f76d15ef1c28364f7447772418ff92e79eefd59fa95d78576bc8628", "query": "If you are using a private docker registry, you first need to call something like and provide a user/pass/email. Any further calls to will pick up your identity from a file in $HOME/.dockercfg. Before a kubelet starts a container, it calls the equivalent of . Unfortunately, the auth information is not picked up from $HOME/.dockercfg, and there is no alternate way to configure it. The kubelet needs to support docker registry auth.\nHere's where the auth should be getting passed to the docker client:\nI'm not certain, but it looks like authenticated docker pulls would have worked before this patch:\nSee .\nah, search skills failed me again", "positive_passages": [{"docid": "doc-en-kubernetes-a70238982f99fadc61e7ce926d51d6c8b6faa8bfc5da0332d4165249799ad153", "text": " #!bash # # bash completion file for core kubecfg commands # # This script provides completion of non replication controller options # # To enable the completions either: # - place this file in /etc/bash_completion.d # or # - copy this file and add the line below to your .bashrc after # bash completion features are loaded # . kubecfg # # Note: # Currently, the completions will not work if the apiserver daemon is not # running on localhost on the standard port 8080 __contains_word () { local w word=$1; shift for w in \"$@\"; do [[ $w = \"$word\" ]] && return done return 1 } # This should be provided by the bash-completions, but give a really simple # stoopid version just in case. It works most of the time. if ! declare -F _get_comp_words_by_ref >/dev/null 2>&1; then _get_comp_words_by_ref () { while [ $# -gt 0 ]; do case \"$1\" in cur) cur=${COMP_WORDS[COMP_CWORD]} ;; prev) prev=${COMP_WORDS[COMP_CWORD-1]} ;; words) words=(\"${COMP_WORDS[@]}\") ;; cword) cword=$COMP_CWORD ;; -n) shift # we don't handle excludes ;; esac shift done } fi __has_service() { local i for ((i=0; i < cword; i++)); do local word=${words[i]} # strip everything after a / so things like pods/[id] match word=${word%%/*} if __contains_word \"${word}\" \"${services[@]}\" && ! 
__contains_word \"${words[i-1]}\" \"${opts[@]}\"; then return 0 fi done return 1 } # call kubecfg list $1, # exclude blank lines # skip the header stuff kubecfg prints on the first 2 lines # append $1/ to the first column and use that in compgen __kubecfg_parse_list() { local kubecfg_output if kubecfg_output=$(kubecfg list \"$1\" 2>/dev/null); then out=($(echo \"${kubecfg_output}\" | awk -v prefix=\"$1\" '/^$/ {next} NR > 2 {print prefix\"/\"$1}')) COMPREPLY=( $( compgen -W \"${out[*]}\" -- \"$cur\" ) ) fi } _kubecfg_specific_service_match() { case \"$cur\" in pods/*) __kubecfg_parse_list pods ;; minions/*) __kubecfg_parse_list minions ;; replicationControllers/*) __kubecfg_parse_list replicationControllers ;; services/*) __kubecfg_parse_list services ;; *) if __has_service; then return 0 fi compopt -o nospace COMPREPLY=( $( compgen -S / -W \"${services[*]}\" -- \"$cur\" ) ) ;; esac } _kubecfg_service_match() { if __has_service; then return 0 fi COMPREPLY=( $( compgen -W \"${services[*]}\" -- \"$cur\" ) ) } _kubecfg() { local opts=( -h -c ) local create_services=(pods replicationControllers services) local update_services=(replicationControllers) local all_services=(pods replicationControllers services minions) local services=(\"${all_services[@]}\") local json_commands=(create update) local all_commands=(create update get list delete stop rm rollingupdate resize) local commands=(\"${all_commands[@]}\") COMPREPLY=() local command local cur prev words cword _get_comp_words_by_ref -n : cur prev words cword if __contains_word \"$prev\" \"${opts[@]}\"; then case $prev in -c) _filedir '@(json|yml|yaml)' return 0 ;; -h) return 0 ;; esac fi if [[ \"$cur\" = -* ]]; then COMPREPLY=( $(compgen -W \"${opts[*]}\" -- \"$cur\") ) return 0 fi # if you passed -c, you are limited to create or update if __contains_word \"-c\" \"${words[@]}\"; then services=(\"${create_services[@]}\" \"${update_services[@]}\") commands=(\"${json_commands[@]}\") fi # figure out which command they are running, remembering that arguments to # options don't count as the command! So a hostname named 'create' won't # trip things up local i for ((i=0; i < cword; i++)); do if __contains_word \"${words[i]}\" \"${commands[@]}\" && ! __contains_word \"${words[i-1]}\" \"${opts[@]}\"; then command=${words[i]} break fi done # tell the list of possible commands if [[ -z ${command} ]]; then COMPREPLY=( $( compgen -W \"${commands[*]}\" -- \"$cur\" ) ) return 0 fi # remove services which you can't update given your command if [[ ${command} == \"create\" ]]; then services=(\"${create_services[@]}\") elif [[ ${command} == \"update\" ]]; then services=(\"${update_services[@]}\") fi case $command in create | list) _kubecfg_service_match ;; update | get | delete) _kubecfg_specific_service_match ;; *) ;; esac return 0 } complete -F _kubecfg kubecfg # ex: ts=4 sw=4 et filetype=sh ", "commid": "kubernetes_pr_1647"}], "negative_passages": []} {"query_id": "q-en-kubernetes-fa1ebef63f76d15ef1c28364f7447772418ff92e79eefd59fa95d78576bc8628", "query": "If you are using a private docker registry, you first need to call something like and provide a user/pass/email. Any further calls to will pick up your identity from a file in $HOME/.dockercfg. Before a kubelet starts a container, it calls the equivalent of . Unfortunately, the auth information is not picked up from $HOME/.dockercfg, and there is no alternate way to configure it. 
The kubelet needs to support docker registry auth.\nHere's where the auth should be getting passed to the docker client:\nI'm not certain, but it looks like authenticated docker pulls would have worked before this patch:\nSee .\nah, search skills failed me again", "positive_passages": [{"docid": "doc-en-kubernetes-3b11a81468f6a5ac23a5c5152bea0f7aaadbb57189a6cd5f4abeb0c3a33cdcd4", "text": " #!bash # # bash completion file for core kubecfg commands # # This script provides completion of non replication controller options # # To enable the completions either: # - place this file in /etc/bash_completion.d # or # - copy this file and add the line below to your .bashrc after # bash completion features are loaded # . kubecfg # # Note: # Currently, the completions will not work if the apiserver daemon is not # running on localhost on the standard port 8080 __contains_word () { local w word=$1; shift for w in \"$@\"; do [[ $w = \"$word\" ]] && return done return 1 } # This should be provided by the bash-completions, but give a really simple # stoopid version just in case. It works most of the time. if ! declare -F _get_comp_words_by_ref >/dev/null 2>&1; then _get_comp_words_by_ref () { while [ $# -gt 0 ]; do case \"$1\" in cur) cur=${COMP_WORDS[COMP_CWORD]} ;; prev) prev=${COMP_WORDS[COMP_CWORD-1]} ;; words) words=(\"${COMP_WORDS[@]}\") ;; cword) cword=$COMP_CWORD ;; -n) shift # we don't handle excludes ;; esac shift done } fi __has_service() { local i for ((i=0; i < cword; i++)); do local word=${words[i]} # strip everything after a / so things like pods/[id] match word=${word%%/*} if __contains_word \"${word}\" \"${services[@]}\" && ! __contains_word \"${words[i-1]}\" \"${opts[@]}\"; then return 0 fi done return 1 } # call kubecfg list $1, # exclude blank lines # skip the header stuff kubecfg prints on the first 2 lines # append $1/ to the first column and use that in compgen __kubecfg_parse_list() { local kubecfg_output if kubecfg_output=$(kubecfg list \"$1\" 2>/dev/null); then out=($(echo \"${kubecfg_output}\" | awk -v prefix=\"$1\" '/^$/ {next} NR > 2 {print prefix\"/\"$1}')) COMPREPLY=( $( compgen -W \"${out[*]}\" -- \"$cur\" ) ) fi } _kubecfg_specific_service_match() { case \"$cur\" in pods/*) __kubecfg_parse_list pods ;; minions/*) __kubecfg_parse_list minions ;; replicationControllers/*) __kubecfg_parse_list replicationControllers ;; services/*) __kubecfg_parse_list services ;; *) if __has_service; then return 0 fi compopt -o nospace COMPREPLY=( $( compgen -S / -W \"${services[*]}\" -- \"$cur\" ) ) ;; esac } _kubecfg_service_match() { if __has_service; then return 0 fi COMPREPLY=( $( compgen -W \"${services[*]}\" -- \"$cur\" ) ) } _kubecfg() { local opts=( -h -c ) local create_services=(pods replicationControllers services) local update_services=(replicationControllers) local all_services=(pods replicationControllers services minions) local services=(\"${all_services[@]}\") local json_commands=(create update) local all_commands=(create update get list delete stop rm rollingupdate resize) local commands=(\"${all_commands[@]}\") COMPREPLY=() local command local cur prev words cword _get_comp_words_by_ref -n : cur prev words cword if __contains_word \"$prev\" \"${opts[@]}\"; then case $prev in -c) _filedir '@(json|yml|yaml)' return 0 ;; -h) return 0 ;; esac fi if [[ \"$cur\" = -* ]]; then COMPREPLY=( $(compgen -W \"${opts[*]}\" -- \"$cur\") ) return 0 fi # if you passed -c, you are limited to create or update if __contains_word \"-c\" \"${words[@]}\"; then services=(\"${create_services[@]}\" 
\"${update_services[@]}\") commands=(\"${json_commands[@]}\") fi # figure out which command they are running, remembering that arguments to # options don't count as the command! So a hostname named 'create' won't # trip things up local i for ((i=0; i < cword; i++)); do if __contains_word \"${words[i]}\" \"${commands[@]}\" && ! __contains_word \"${words[i-1]}\" \"${opts[@]}\"; then command=${words[i]} break fi done # tell the list of possible commands if [[ -z ${command} ]]; then COMPREPLY=( $( compgen -W \"${commands[*]}\" -- \"$cur\" ) ) return 0 fi # remove services which you can't update given your command if [[ ${command} == \"create\" ]]; then services=(\"${create_services[@]}\") elif [[ ${command} == \"update\" ]]; then services=(\"${update_services[@]}\") fi case $command in create | list) _kubecfg_service_match ;; update | get | delete) _kubecfg_specific_service_match ;; *) ;; esac return 0 } complete -F _kubecfg kubecfg # ex: ts=4 sw=4 et filetype=sh ", "commid": "kubernetes_pr_1647"}], "negative_passages": []} {"query_id": "q-en-kubernetes-24df73a6db05c45f0cac4e827cf56a2dc89af2bfb599ee7ed6ea71c4b1f5c613", "query": "Should be able to tell from logs if a minion isn't present because cloud provider didn't tell us about it, or if it's failing health checks.", "positive_passages": [{"docid": "doc-en-kubernetes-745d381fa41da15f155ef8da60ae55fcca0f06dadec83246b90b044f9bd74fa6", "text": "} if status == health.Healthy { result = append(result, minion) } else { glog.Errorf(\"%s failed a health check, ignoring.\", minion) } } return result, nil", "commid": "kubernetes_pr_1019"}], "negative_passages": []} {"query_id": "q-en-kubernetes-823174cd7a80c2aabbd968d214ca3d07426d76d7cc3a412fb0316f4befd8fbf6", "query": "https://travis- Seems surprising-- I will look into it later if no one else does first.\nGot another flake. Looking into it.", "positive_passages": [{"docid": "doc-en-kubernetes-fc8264fb7df08567e2d75f8f24c25060160872ae681efacdb33ce35d071441bd", "text": "func TestErrorsToAPIStatus(t *testing.T) { cases := map[error]api.Status{ NewAlreadyExistsErr(\"foo\", \"bar\"): api.Status{ NewAlreadyExistsErr(\"foo\", \"bar\"): { Status: api.StatusFailure, Code: http.StatusConflict, Reason: \"already_exists\",", "commid": "kubernetes_pr_858"}], "negative_passages": []} {"query_id": "q-en-kubernetes-823174cd7a80c2aabbd968d214ca3d07426d76d7cc3a412fb0316f4befd8fbf6", "query": "https://travis- Seems surprising-- I will look into it later if no one else does first.\nGot another flake. Looking into it.", "positive_passages": [{"docid": "doc-en-kubernetes-e0b3f5c2bdc6e8c560cf17ee8d76c4cb251d10eff3595b5e6d934d97d3fecc66", "text": "ID: \"bar\", }, }, NewConflictErr(\"foo\", \"bar\", errors.New(\"failure\")): api.Status{ NewConflictErr(\"foo\", \"bar\", errors.New(\"failure\")): { Status: api.StatusFailure, Code: http.StatusConflict, Reason: \"conflict\",", "commid": "kubernetes_pr_858"}], "negative_passages": []} {"query_id": "q-en-kubernetes-823174cd7a80c2aabbd968d214ca3d07426d76d7cc3a412fb0316f4befd8fbf6", "query": "https://travis- Seems surprising-- I will look into it later if no one else does first.\nGot another flake. 
Looking into it.", "positive_passages": [{"docid": "doc-en-kubernetes-98ebd6b80d6b9f12ac1b115a06371f14a92cdb96c50946e24109e61012d9efca", "text": "} func TestSyncCreateTimeout(t *testing.T) { testOver := make(chan struct{}) defer close(testOver) storage := SimpleRESTStorage{ injectedFunction: func(obj interface{}) (interface{}, error) { time.Sleep(5 * time.Millisecond) // Eliminate flakes by ensuring the create operation takes longer than this test. <-testOver return obj, nil }, }", "commid": "kubernetes_pr_858"}], "negative_passages": []} {"query_id": "q-en-kubernetes-823174cd7a80c2aabbd968d214ca3d07426d76d7cc3a412fb0316f4befd8fbf6", "query": "https://travis- Seems surprising-- I will look into it later if no one else does first.\nGot another flake. Looking into it.", "positive_passages": [{"docid": "doc-en-kubernetes-69e82608088ae166fb63f35140cfbc8c4e5e7037b4dcc6b261029d67a4c00790", "text": "} func TestOpGet(t *testing.T) { simpleStorage := &SimpleRESTStorage{} testOver := make(chan struct{}) defer close(testOver) simpleStorage := &SimpleRESTStorage{ injectedFunction: func(obj interface{}) (interface{}, error) { // Eliminate flakes by ensuring the create operation takes longer than this test. <-testOver return obj, nil }, } handler := New(map[string]RESTStorage{ \"foo\": simpleStorage, }, codec, \"/prefix/version\")", "commid": "kubernetes_pr_858"}], "negative_passages": []} {"query_id": "q-en-kubernetes-41a4a9d51023611803efc4b60b5824c1d1fbca7fe22577f19d7270fd74d5ef0d", "query": "if you delete a service and restart it the port is no longer accessible To reproduce: 1) start a pod 2) start a service All works fine port is assigned - traffic flows 3) delete service 4) start service the port is now not assigned Here are sample configs: pod: Service:\nI thought this was a known issue, but I could not find another issue on the same topic. Note that all containers started after the first instance of the service was created will have environment variables referring to that instance and won't see environment variables referring to the new instance until they restart. This is a fundamental shortcoming of the current approach.\nI understand what you just replied but that is not the issue here. The problem is that when you delete a service , no other service can resuse that port successfully. The port is removed from a netstat listing. When a new service tries to use that port it gets allocated but you can not communicate to the pod through it. It is hosed until I re-install the cluster. (I have not tried rebooting the cluster) BTW this is a bug not a design issue.\nAh. The previous issue I was thinking of was . It sounds like the issue hasn't been completely resolved.\nI will take a look at this today.\nSorry for the problem, has the fix. --brendan\nThanks! 
On Aug 19, 2014, at 2:26 PM, brendandburns wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-bee0dfa0415af80051e30433b1014e5509d82a22f43a90957446c5df8414bd2b", "text": "for _, service := range services { activeServices.Insert(service.ID) info, exists := proxier.getServiceInfo(service.ID) if exists && info.port == service.Port { if exists && info.active && info.port == service.Port { continue } if exists { if exists && info.port != service.Port { proxier.StopProxy(service.ID) } glog.Infof(\"Adding a new service %s on port %d\", service.ID, service.Port)", "commid": "kubernetes_pr_956"}], "negative_passages": []} {"query_id": "q-en-kubernetes-41a4a9d51023611803efc4b60b5824c1d1fbca7fe22577f19d7270fd74d5ef0d", "query": "if you delete a service and restart it the port is no longer accessible To reproduce: 1) start a pod 2) start a service All works fine port is assigned - traffic flows 3) delete service 4) start service the port is now not assigned Here are sample configs: pod: Service:\nI thought this was a known issue, but I could not find another issue on the same topic. Note that all containers started after the first instance of the service was created will have environment variables referring to that instance and won't see environment variables referring to the new instance until they restart. This is a fundamental shortcoming of the current approach.\nI understand what you just replied but that is not the issue here. The problem is that when you delete a service , no other service can resuse that port successfully. The port is removed from a netstat listing. When a new service tries to use that port it gets allocated but you can not communicate to the pod through it. It is hosed until I re-install the cluster. (I have not tried rebooting the cluster) BTW this is a bug not a design issue.\nAh. The previous issue I was thinking of was . It sounds like the issue hasn't been completely resolved.\nI will take a look at this today.\nSorry for the problem, has the fix. --brendan\nThanks! 
On Aug 19, 2014, at 2:26 PM, brendandburns wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-03513259d836d47077e1b4fe013229e0eeeb501e38350b0eccfa472ed8d62a98", "text": "} } func TestProxyUpdateDeleteUpdate(t *testing.T) { lb := NewLoadBalancerRR() lb.OnUpdate([]api.Endpoints{{JSONBase: api.JSONBase{ID: \"echo\"}, Endpoints: []string{net.JoinHostPort(\"127.0.0.1\", port)}}}) p := NewProxier(lb) proxyPort, err := p.addServiceOnUnusedPort(\"echo\") if err != nil { t.Fatalf(\"error adding new service: %#v\", err) } conn, err := net.Dial(\"tcp\", net.JoinHostPort(\"127.0.0.1\", proxyPort)) if err != nil { t.Fatalf(\"error connecting to proxy: %v\", err) } conn.Close() p.OnUpdate([]api.Service{}) if err := waitForClosedPort(p, proxyPort); err != nil { t.Fatalf(err.Error()) } proxyPortNum, _ := strconv.Atoi(proxyPort) p.OnUpdate([]api.Service{ {JSONBase: api.JSONBase{ID: \"echo\"}, Port: proxyPortNum}, }) testEchoConnection(t, \"127.0.0.1\", proxyPort) } func TestProxyUpdatePort(t *testing.T) { lb := NewLoadBalancerRR() lb.OnUpdate([]api.Endpoints{{JSONBase: api.JSONBase{ID: \"echo\"}, Endpoints: []string{net.JoinHostPort(\"127.0.0.1\", port)}}})", "commid": "kubernetes_pr_956"}], "negative_passages": []} {"query_id": "q-en-kubernetes-41a4a9d51023611803efc4b60b5824c1d1fbca7fe22577f19d7270fd74d5ef0d", "query": "if you delete a service and restart it the port is no longer accessible To reproduce: 1) start a pod 2) start a service All works fine port is assigned - traffic flows 3) delete service 4) start service the port is now not assigned Here are sample configs: pod: Service:\nI thought this was a known issue, but I could not find another issue on the same topic. Note that all containers started after the first instance of the service was created will have environment variables referring to that instance and won't see environment variables referring to the new instance until they restart. This is a fundamental shortcoming of the current approach.\nI understand what you just replied but that is not the issue here. The problem is that when you delete a service , no other service can resuse that port successfully. The port is removed from a netstat listing. When a new service tries to use that port it gets allocated but you can not communicate to the pod through it. It is hosed until I re-install the cluster. (I have not tried rebooting the cluster) BTW this is a bug not a design issue.\nAh. The previous issue I was thinking of was . It sounds like the issue hasn't been completely resolved.\nI will take a look at this today.\nSorry for the problem, has the fix. --brendan\nThanks! On Aug 19, 2014, at 2:26 PM, brendandburns wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-5f6b46d36e3704a92aa12d61d2641b75938a38d1e9db1833bbd7f60535ed958d", "text": "} testEchoConnection(t, \"127.0.0.1\", newPort) } func TestProxyUpdatePortLetsGoOfOldPort(t *testing.T) { lb := NewLoadBalancerRR() lb.OnUpdate([]api.Endpoints{{JSONBase: api.JSONBase{ID: \"echo\"}, Endpoints: []string{net.JoinHostPort(\"127.0.0.1\", port)}}}) p := NewProxier(lb) proxyPort, err := p.addServiceOnUnusedPort(\"echo\") if err != nil { t.Fatalf(\"error adding new service: %#v\", err) } // add a new dummy listener in order to get a port that is free l, _ := net.Listen(\"tcp\", \":0\") _, newPort, _ := net.SplitHostPort(l.Addr().String()) portNum, _ := strconv.Atoi(newPort) l.Close() // Wait for the socket to actually get free. 
if err := waitForClosedPort(p, newPort); err != nil { t.Fatalf(err.Error()) } if proxyPort == newPort { t.Errorf(\"expected difference, got %s %s\", newPort, proxyPort) } p.OnUpdate([]api.Service{ {JSONBase: api.JSONBase{ID: \"echo\"}, Port: portNum}, }) if err := waitForClosedPort(p, proxyPort); err != nil { t.Fatalf(err.Error()) } testEchoConnection(t, \"127.0.0.1\", newPort) proxyPortNum, _ := strconv.Atoi(proxyPort) p.OnUpdate([]api.Service{ {JSONBase: api.JSONBase{ID: \"echo\"}, Port: proxyPortNum}, }) if err := waitForClosedPort(p, newPort); err != nil { t.Fatalf(err.Error()) } testEchoConnection(t, \"127.0.0.1\", proxyPort) } ", "commid": "kubernetes_pr_956"}], "negative_passages": []} {"query_id": "q-en-kubernetes-99e02bbd6dfd36028985d05c078261611bf34deff196750e9774a0cbdb8c8a20", "query": "We have an e2e test that runs the guestbook example. We should also have one that runs through the other example.", "positive_passages": [{"docid": "doc-en-kubernetes-a5df071292da4751cd50d7569ef1932600ef8908f38a48124af25581a2f4dd3c", "text": "trap \"teardown\" EXIT POD_ID_LIST=$($CLOUDCFG -json -l name=myNginx list pods | jq \".items[].id\") POD_ID_LIST=$($CLOUDCFG '-template={{range.Items}}{{.ID}} {{end}}' -l name=myNginx list pods) # Container turn up on a clean cluster can take a while for the docker image pull. ALL_RUNNING=0 while [ $ALL_RUNNING -ne 1 ]; do", "commid": "kubernetes_pr_965"}], "negative_passages": []} {"query_id": "q-en-kubernetes-99e02bbd6dfd36028985d05c078261611bf34deff196750e9774a0cbdb8c8a20", "query": "We have an e2e test that runs the guestbook example. We should also have one that runs through the other example.", "positive_passages": [{"docid": "doc-en-kubernetes-45f4ec9c4c966259384749e6d930a90f2d36f4eda8e3a67079db6cfe942c3555", "text": "sleep 5 ALL_RUNNING=1 for id in $POD_ID_LIST; do CURRENT_STATUS=$(remove-quotes $($CLOUDCFG -json get \"pods/$(remove-quotes ${id})\" | jq '.currentState.info[\"mynginx\"].State.Running and .currentState.info[\"net\"].State.Running')) CURRENT_STATUS=$($CLOUDCFG -template '{{and .CurrentState.Info.mynginx.State.Running .CurrentState.Info.net.State.Running}}' get pods/$id) if [ \"$CURRENT_STATUS\" != \"true\" ]; then ALL_RUNNING=0 fi", "commid": "kubernetes_pr_965"}], "negative_passages": []} {"query_id": "q-en-kubernetes-99e02bbd6dfd36028985d05c078261611bf34deff196750e9774a0cbdb8c8a20", "query": "We have an e2e test that runs the guestbook example. We should also have one that runs through the other example.", "positive_passages": [{"docid": "doc-en-kubernetes-6e2bda87e8c7f1ac4e6bab4e661bd8aa237e5e1a8fe59dde0b7c6d74659b0c34", "text": "sleep 5 POD_LIST_1=$($CLOUDCFG -json list pods | jq \".items[].id\") POD_LIST_1=$($CLOUDCFG '-template={{range.Items}}{{.ID}} {{end}}' list pods) echo \"Pods running: ${POD_LIST_1}\" $CLOUDCFG stop redisSlaveController", "commid": "kubernetes_pr_965"}], "negative_passages": []} {"query_id": "q-en-kubernetes-99e02bbd6dfd36028985d05c078261611bf34deff196750e9774a0cbdb8c8a20", "query": "We have an e2e test that runs the guestbook example. 
We should also have one that runs through the other example.", "positive_passages": [{"docid": "doc-en-kubernetes-648f4f93b9e8c651b3e943319bf38b385595c6af065de8d48cb05602612e8bb9", "text": "$CLOUDCFG delete services/redismaster $CLOUDCFG delete pods/redis-master-2 POD_LIST_2=$($CLOUDCFG -json list pods | jq \".items[].id\") POD_LIST_2=$($CLOUDCFG '-template={{range.Items}}{{.ID}} {{end}}' list pods) echo \"Pods running after shutdown: ${POD_LIST_2}\" exit 0", "commid": "kubernetes_pr_965"}], "negative_passages": []} {"query_id": "q-en-kubernetes-99e02bbd6dfd36028985d05c078261611bf34deff196750e9774a0cbdb8c8a20", "query": "We have an e2e test that runs the guestbook example. We should also have one that runs through the other example.", "positive_passages": [{"docid": "doc-en-kubernetes-5e8c4d1ed3c082bc29e8fd243c98005ab4f37bcc7223ebf6c8d65da27ab06e3c", "text": " #!/bin/bash # Copyright 2014 Google Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Launches an nginx container and verifies it can be reached. Assumes that # we're being called by hack/e2e-test.sh (we use some env vars it sets up). # Exit on error set -e source \"${KUBE_REPO_ROOT}/cluster/kube-env.sh\" source \"${KUBE_REPO_ROOT}/cluster/$KUBERNETES_PROVIDER/util.sh\" function validate() { POD_ID_LIST=$($CLOUDCFG '-template={{range.Items}}{{.ID}} {{end}}' -l name=$controller list pods) # Container turn up on a clean cluster can take a while for the docker image pull. ALL_RUNNING=0 while [ $ALL_RUNNING -ne 1 ]; do echo \"Waiting for all containers in pod to come up.\" sleep 5 ALL_RUNNING=1 for id in $POD_ID_LIST; do CURRENT_STATUS=$($CLOUDCFG -template '{{and .CurrentState.Info.datacontroller.State.Running .CurrentState.Info.net.State.Running}}' get pods/$id) if [ \"$CURRENT_STATUS\" != \"true\" ]; then ALL_RUNNING=0 fi done done ids=($POD_ID_LIST) if [ ${#ids[@]} -ne $1 ]; then echo \"Unexpected number of pods: ${#ids[@]}\" exit 1 fi } controller=dataController # Launch a container $CLOUDCFG -p 8080:80 run brendanburns/data 2 $controller function teardown() { echo \"Cleaning up test artifacts\" $CLOUDCFG stop $controller $CLOUDCFG rm $controller } trap \"teardown\" EXIT validate 2 $CLOUDCFG resize $controller 1 validate 1 $CLOUDCFG resize $controller 2 validate 2 # TODO: test rolling update here, but to do so, we need to make the update blocking # $CLOUDCFG -u=20s rollingupdate $controller # # Wait for the replica controller to recreate # sleep 10 # # validate 2 exit 0 ", "commid": "kubernetes_pr_965"}], "negative_passages": []} {"query_id": "q-en-kubernetes-99e02bbd6dfd36028985d05c078261611bf34deff196750e9774a0cbdb8c8a20", "query": "We have an e2e test that runs the guestbook example. 
We should also have one that runs through the other example.", "positive_passages": [{"docid": "doc-en-kubernetes-e93d0912014231880624770df242c0fba29a38c39974928ffe51a12efc7d001e", "text": "LEAVE_UP=${2:-0} TEAR_DOWN=${3:-0} HAVE_JQ=$(which jq) if [[ -z ${HAVE_JQ} ]]; then echo \"Please install jq, e.g.: 'sudo apt-get install jq' or, \" echo \"'sudo yum install jq' or, \" echo \"if you're on a mac with homebrew, 'brew install jq'.\" exit 1 fi # Exit on error set -e", "commid": "kubernetes_pr_965"}], "negative_passages": []} {"query_id": "q-en-kubernetes-09fca157bd43c35013d3772434f9998a9623afc2f92a2c7f553134f7bdea5afe", "query": "Running vagrant from scratch gives the error: For vagrant cluster, the last minion cannot be detected. The /tmp/minions file only shows two minions, but after a while, using the same command, i.e. will get three minions. There is a race between kube-up and validate-, we should wait a while before panic.\nI can look to fix this.", "positive_passages": [{"docid": "doc-en-kubernetes-e4d50f5d87bfdfd1daf6fe4b2eb739aecb09d1fff0a82b55268e35ac6511abb4", "text": "for (( i=0; i <${NUM_MINIONS}; i++)) do KUBE_MINION_IP_ADDRESSES[$i]=\"${MINION_IP_BASE}$[$i+2]\" MINION_NAMES[$i]=\"${MINION_IP_BASE}$[$i+2]\" done No newline at end of file VAGRANT_MINION_NAMES[$i]=\"minion-$[$i+1]\" done ", "commid": "kubernetes_pr_1155"}], "negative_passages": []} {"query_id": "q-en-kubernetes-09fca157bd43c35013d3772434f9998a9623afc2f92a2c7f553134f7bdea5afe", "query": "Running vagrant from scratch gives the error: For vagrant cluster, the last minion cannot be detected. The /tmp/minions file only shows two minions, but after a while, using the same command, i.e. will get three minions. There is a race between kube-up and validate-, we should wait a while before panic.\nI can look to fix this.", "positive_passages": [{"docid": "doc-en-kubernetes-3790b2eb6255d6d16694cf30d96e497f273431a7167b431c4b10880786cd294e", "text": "source $(dirname ${BASH_SOURCE})/${KUBE_CONFIG_FILE-\"config-default.sh\"} function detect-master () { echo \"KUBE_MASTER_IP: $KUBE_MASTER_IP\" echo \"KUBE_MASTER: $KUBE_MASTER\" echo \"KUBE_MASTER_IP: $KUBE_MASTER_IP\" echo \"KUBE_MASTER: $KUBE_MASTER\" } # Get minion IP addresses and store in KUBE_MINION_IP_ADDRESSES[] function detect-minions { echo \"Minions already detected\" echo \"Minions already detected\" } # Verify prereqs on host machine", "commid": "kubernetes_pr_1155"}], "negative_passages": []} {"query_id": "q-en-kubernetes-09fca157bd43c35013d3772434f9998a9623afc2f92a2c7f553134f7bdea5afe", "query": "Running vagrant from scratch gives the error: For vagrant cluster, the last minion cannot be detected. The /tmp/minions file only shows two minions, but after a while, using the same command, i.e. will get three minions. 
There is a race between kube-up and validate-, we should wait a while before panic.\nI can look to fix this.", "positive_passages": [{"docid": "doc-en-kubernetes-cba7902427a9b9b98d6a55fc48c3ec056d7cd6ea5a0735514bdbf4e9d40b745e", "text": "} # Instantiate a kubernetes cluster function kube-up { vagrant up function kube-up { get-password vagrant up echo \"Each machine instance has been created.\" echo \" Now waiting for the Salt provisioning process to complete on each machine.\" echo \" This can take some time based on your network, disk, and cpu speed.\" echo \" It is possible for an error to occur during Salt provision of cluster and this could loop forever.\" # verify master has all required daemons echo \"Validating master\" MACHINE=\"master\" REQUIRED_DAEMON=(\"salt-master\" \"salt-minion\" \"apiserver\" \"nginx\" \"controller-manager\" \"scheduler\") VALIDATED=\"1\" until [ \"$VALIDATED\" -eq \"0\" ]; do VALIDATED=\"0\" for daemon in ${REQUIRED_DAEMON[@]}; do vagrant ssh $MACHINE -c \"which $daemon\" >/dev/null 2>&1 || { printf \".\"; VALIDATED=\"1\"; sleep 2; } done done # verify each minion has all required daemons for (( i=0; i<${#MINION_NAMES[@]}; i++)); do echo \"Validating ${VAGRANT_MINION_NAMES[$i]}\" MACHINE=${VAGRANT_MINION_NAMES[$i]} REQUIRED_DAEMON=(\"salt-minion\" \"kubelet\" \"docker\") VALIDATED=\"1\" until [ \"$VALIDATED\" -eq \"0\" ]; do VALIDATED=\"0\" for daemon in ${REQUIRED_DAEMON[@]}; do vagrant ssh $MACHINE -c \"which $daemon\" >/dev/null 2>&1 || { printf \".\"; VALIDATED=\"1\"; sleep 2; } done done done echo echo \"Waiting for each minion to be registered with cloud provider\" for (( i=0; i<${#MINION_NAMES[@]}; i++)); do COUNT=\"0\" until [ \"$COUNT\" -eq \"1\" ]; do $(dirname $0)/kubecfg.sh -template '{{range.Items}}{{.ID}}:{{end}}' list minions > /tmp/minions COUNT=$(grep -c ${MINION_NAMES[i]} /tmp/minions) || { printf \".\"; sleep 2; COUNT=\"0\"; } done done echo echo \"Kubernetes cluster created.\" echo echo \"Kubernetes cluster is running. Access the master at:\" echo echo \" https://${user}:${passwd}@${KUBE_MASTER_IP}\" } # Delete a kubernetes cluster function kube-down { vagrant destroy -f vagrant destroy -f } # Update a kubernetes cluster with latest source function kube-push { vagrant provision vagrant provision } # Execute prior to running tests to build a release if required for env function test-build-release { echo \"Vagrant provider can skip release build\" echo \"Vagrant provider can skip release build\" } # Execute prior to running tests to initialize required structure function test-setup { echo \"Vagrant test setup complete\" echo \"Vagrant test setup complete\" } # Execute after running tests to perform any required clean-up function test-teardown { echo \"Vagrant ignores tear-down\" echo \"Vagrant ignores tear-down\" } # Set the {user} and {password} environment values required to interact with provider function get-password { export user=vagrant export passwd=vagrant echo \"Using credentials: $user:$passwd\" export user=vagrant export passwd=vagrant echo \"Using credentials: $user:$passwd\" }", "commid": "kubernetes_pr_1155"}], "negative_passages": []} {"query_id": "q-en-kubernetes-b9fbe44fc01f631378bc467f73c902ae0e65f3298439f7434f4b3f0d323b9e8e", "query": "As a sysadmin When using the Vagrant cluster provided And logged in as the vagrant user to the master or minions I would like to be able to call docker commands wihout requiring the 'sudo' prefix. 
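The validation loop above shells out to kubecfg with a -template flag; that template string is plain Go text/template syntax. A standalone sketch of evaluating the same template over a made-up minion list (the IP values are invented for the example):

```go
package main

import (
	"os"
	"text/template"
)

// MinionList mimics the shape the kubecfg -template examples above iterate
// over: a struct with an Items slice whose elements expose an ID field.
type MinionList struct {
	Items []struct{ ID string }
}

func main() {
	list := MinionList{Items: []struct{ ID string }{
		{ID: "10.245.1.2"}, {ID: "10.245.1.3"}, {ID: "10.245.1.4"},
	}}
	// Same template string the cluster scripts pass via -template.
	tmpl := template.Must(template.New("minions").Parse(`{{range.Items}}{{.ID}}:{{end}}`))
	if err := tmpl.Execute(os.Stdout, list); err != nil {
		panic(err)
	}
	// Output: 10.245.1.2:10.245.1.3:10.245.1.4:
}
```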
For Fedora =19 and RHEL=7 this can be enabled by adding the vagrant user to the docker group in /etc/groups\nThis should be trivial, but its tricker to do because of the whole salt configuration process as the docker group is only created on each kubernetes-minion once docker is installed by the salt-minion. To do this properly, we would need a salt state change for this, and new flag in to let you specify additional users that should be in the docker group.", "positive_passages": [{"docid": "doc-en-kubernetes-a2705aed7e9007718f2ef62b604c4ccd0380b585086a92b695ecbe989d1d0d66", "text": "'roles:kubernetes-pool-vsphere': - match: grain - static-routes 'roles:kubernetes-pool-vagrant': - match: grain - vagrant ", "commid": "kubernetes_pr_1282"}], "negative_passages": []} {"query_id": "q-en-kubernetes-b9fbe44fc01f631378bc467f73c902ae0e65f3298439f7434f4b3f0d323b9e8e", "query": "As a sysadmin When using the Vagrant cluster provided And logged in as the vagrant user to the master or minions I would like to be able to call docker commands wihout requiring the 'sudo' prefix. For Fedora =19 and RHEL=7 this can be enabled by adding the vagrant user to the docker group in /etc/groups\nThis should be trivial, but its tricker to do because of the whole salt configuration process as the docker group is only created on each kubernetes-minion once docker is installed by the salt-minion. To do this properly, we would need a salt state change for this, and new flag in to let you specify additional users that should be in the docker group.", "positive_passages": [{"docid": "doc-en-kubernetes-04e5bd356c557b98d8e015e8de5daab053b70313bf448a52cf22b41022efeb45", "text": " vagrant: user.present: - optional_groups: - docker - remove_groups: False - require: - pkg: docker-io ", "commid": "kubernetes_pr_1282"}], "negative_passages": []} {"query_id": "q-en-kubernetes-b9fbe44fc01f631378bc467f73c902ae0e65f3298439f7434f4b3f0d323b9e8e", "query": "As a sysadmin When using the Vagrant cluster provided And logged in as the vagrant user to the master or minions I would like to be able to call docker commands wihout requiring the 'sudo' prefix. For Fedora =19 and RHEL=7 this can be enabled by adding the vagrant user to the docker group in /etc/groups\nThis should be trivial, but its tricker to do because of the whole salt configuration process as the docker group is only created on each kubernetes-minion once docker is installed by the salt-minion. To do this properly, we would need a salt state change for this, and new flag in to let you specify additional users that should be in the docker group.", "positive_passages": [{"docid": "doc-en-kubernetes-137926659c3f142ee294c3f02865497aea6ae6f6fdf025f24431d037f7c82edd", "text": "etcd_servers: $MASTER_IP roles: - kubernetes-pool - kubernetes-pool-vagrant cbr-cidr: $MINION_IP_RANGE minion_ip: $MINION_IP EOF", "commid": "kubernetes_pr_1282"}], "negative_passages": []} {"query_id": "q-en-kubernetes-a987b5e3b3353d21d7a8120b4ee9be1106a65ade9430df500d1a08c675d1ca1a", "query": "When creating replication controller (e.g. frontend php-redis from guestbook example), scheduler gets stuck and never recovers. This is a race, I have to run several times to see the error (A script to delete controller and pods, then create new controller, leave enough time in between for k8s to react). I've created PR for an attempted fix.\nI saw this too while following the guest book and creating the php/frontend step. 
In my case, I had 2 minions while the replication controller configuration had configured 3 replicas, so 2 pods got scheduled -each on another minion, the third pod had empty Host (saw that when doing 'list pods'), and the 3 of them were stuck in \"waiting\" status. I deleted the controller and the pods, fixed the configuration in json file to be 2 replicas and recreated the controller and everything worked. Does your fix take care of this use case? (too many replicas, too few minions/one of the pods is not scheduled)\nThis is different from your use case. If you have more replicas than minions, then the scheduler will get stuck for sure (with the error msg probably being 'failed to find a fit for pod'). AFAICT, it's not support yet.\nIf you remove the HostPorts, it should work for you. thanks for report.\n, thanks for your replies. I understand why it isn't working, but I'd expect one of the 2 options to happen instead of the current situation: the creation of replication controller if there are less minions than replicas, AND there is a hostPort property. In other words - validation of the content during creation. least schedule all replicas that can get a minion (in my case 2 out of 3) and leave only one at \"waiting\". Is there an issue open about the use case I describe?\nshould be true today. is less possible than it seems like-- minions can come and go, and become over or under loaded in the time between creation and when the pod shows up at the scheduler. So even if we check up front, we still have to check again later, and sometimes the first check will pass but not the second.", "positive_passages": [{"docid": "doc-en-kubernetes-f3959c6de3539f23b7d34fce4582fef8eb76afd91fb0856f55c645adaf76ec9c", "text": "return map[string][]api.Pod{}, err } for _, scheduledPod := range pods { host := scheduledPod.CurrentState.Host host := scheduledPod.DesiredState.Host machineToPods[host] = append(machineToPods[host], scheduledPod) } return machineToPods, nil", "commid": "kubernetes_pr_1752"}], "negative_passages": []} {"query_id": "q-en-kubernetes-a987b5e3b3353d21d7a8120b4ee9be1106a65ade9430df500d1a08c675d1ca1a", "query": "When creating replication controller (e.g. frontend php-redis from guestbook example), scheduler gets stuck and never recovers. This is a race, I have to run several times to see the error (A script to delete controller and pods, then create new controller, leave enough time in between for k8s to react). I've created PR for an attempted fix.\nI saw this too while following the guest book and creating the php/frontend step. In my case, I had 2 minions while the replication controller configuration had configured 3 replicas, so 2 pods got scheduled -each on another minion, the third pod had empty Host (saw that when doing 'list pods'), and the 3 of them were stuck in \"waiting\" status. I deleted the controller and the pods, fixed the configuration in json file to be 2 replicas and recreated the controller and everything worked. Does your fix take care of this use case? (too many replicas, too few minions/one of the pods is not scheduled)\nThis is different from your use case. If you have more replicas than minions, then the scheduler will get stuck for sure (with the error msg probably being 'failed to find a fit for pod'). AFAICT, it's not support yet.\nIf you remove the HostPorts, it should work for you. thanks for report.\n, thanks for your replies. 
I understand why it isn't working, but I'd expect one of the 2 options to happen instead of the current situation: the creation of replication controller if there are less minions than replicas, AND there is a hostPort property. In other words - validation of the content during creation. least schedule all replicas that can get a minion (in my case 2 out of 3) and leave only one at \"waiting\". Is there an issue open about the use case I describe?\nshould be true today. is less possible than it seems like-- minions can come and go, and become over or under loaded in the time between creation and when the pod shows up at the scheduler. So even if we check up front, we still have to check again later, and sometimes the first check will pass but not the second.", "positive_passages": [{"docid": "doc-en-kubernetes-46b84c8226a3127ae7cfce6a2900a9480392f426f142978e647fd0c5a0aa1385", "text": "{CPU: 2000}, }, }, Host: \"machine1\", } cpuAndMemory := api.PodState{ Manifest: api.ContainerManifest{", "commid": "kubernetes_pr_1752"}], "negative_passages": []} {"query_id": "q-en-kubernetes-a987b5e3b3353d21d7a8120b4ee9be1106a65ade9430df500d1a08c675d1ca1a", "query": "When creating replication controller (e.g. frontend php-redis from guestbook example), scheduler gets stuck and never recovers. This is a race, I have to run several times to see the error (A script to delete controller and pods, then create new controller, leave enough time in between for k8s to react). I've created PR for an attempted fix.\nI saw this too while following the guest book and creating the php/frontend step. In my case, I had 2 minions while the replication controller configuration had configured 3 replicas, so 2 pods got scheduled -each on another minion, the third pod had empty Host (saw that when doing 'list pods'), and the 3 of them were stuck in \"waiting\" status. I deleted the controller and the pods, fixed the configuration in json file to be 2 replicas and recreated the controller and everything worked. Does your fix take care of this use case? (too many replicas, too few minions/one of the pods is not scheduled)\nThis is different from your use case. If you have more replicas than minions, then the scheduler will get stuck for sure (with the error msg probably being 'failed to find a fit for pod'). AFAICT, it's not support yet.\nIf you remove the HostPorts, it should work for you. thanks for report.\n, thanks for your replies. I understand why it isn't working, but I'd expect one of the 2 options to happen instead of the current situation: the creation of replication controller if there are less minions than replicas, AND there is a hostPort property. In other words - validation of the content during creation. least schedule all replicas that can get a minion (in my case 2 out of 3) and leave only one at \"waiting\". Is there an issue open about the use case I describe?\nshould be true today. is less possible than it seems like-- minions can come and go, and become over or under loaded in the time between creation and when the pod shows up at the scheduler. 
So even if we check up front, we still have to check again later, and sometimes the first check will pass but not the second.", "positive_passages": [{"docid": "doc-en-kubernetes-372c9d87de6ea0cb06368e2b96c94eca4d59b034eddb2c083764aabbf65ed16a", "text": "{CPU: 2000, Memory: 3000}, }, }, Host: \"machine2\", } tests := []struct { pod api.Pod", "commid": "kubernetes_pr_1752"}], "negative_passages": []} {"query_id": "q-en-kubernetes-a987b5e3b3353d21d7a8120b4ee9be1106a65ade9430df500d1a08c675d1ca1a", "query": "When creating replication controller (e.g. frontend php-redis from guestbook example), scheduler gets stuck and never recovers. This is a race, I have to run several times to see the error (A script to delete controller and pods, then create new controller, leave enough time in between for k8s to react). I've created PR for an attempted fix.\nI saw this too while following the guest book and creating the php/frontend step. In my case, I had 2 minions while the replication controller configuration had configured 3 replicas, so 2 pods got scheduled -each on another minion, the third pod had empty Host (saw that when doing 'list pods'), and the 3 of them were stuck in \"waiting\" status. I deleted the controller and the pods, fixed the configuration in json file to be 2 replicas and recreated the controller and everything worked. Does your fix take care of this use case? (too many replicas, too few minions/one of the pods is not scheduled)\nThis is different from your use case. If you have more replicas than minions, then the scheduler will get stuck for sure (with the error msg probably being 'failed to find a fit for pod'). AFAICT, it's not support yet.\nIf you remove the HostPorts, it should work for you. thanks for report.\n, thanks for your replies. I understand why it isn't working, but I'd expect one of the 2 options to happen instead of the current situation: the creation of replication controller if there are less minions than replicas, AND there is a hostPort property. In other words - validation of the content during creation. least schedule all replicas that can get a minion (in my case 2 out of 3) and leave only one at \"waiting\". Is there an issue open about the use case I describe?\nshould be true today. is less possible than it seems like-- minions can come and go, and become over or under loaded in the time between creation and when the pod shows up at the scheduler. So even if we check up front, we still have to check again later, and sometimes the first check will pass but not the second.", "positive_passages": [{"docid": "doc-en-kubernetes-34a15a11d883ec2f0419d3400e0d9e285eda5150fbd4a825490cf895a25a93c5", "text": "expectedList: []HostPriority{{\"machine1\", 0}, {\"machine2\", 0}}, test: \"no resources requested\", pods: []api.Pod{ {CurrentState: machine1State, Labels: labels2}, {CurrentState: machine1State, Labels: labels1}, {CurrentState: machine2State, Labels: labels1}, {CurrentState: machine2State, Labels: labels1}, {DesiredState: machine1State, Labels: labels2}, {DesiredState: machine1State, Labels: labels1}, {DesiredState: machine2State, Labels: labels1}, {DesiredState: machine2State, Labels: labels1}, }, }, {", "commid": "kubernetes_pr_1752"}], "negative_passages": []} {"query_id": "q-en-kubernetes-a987b5e3b3353d21d7a8120b4ee9be1106a65ade9430df500d1a08c675d1ca1a", "query": "When creating replication controller (e.g. frontend php-redis from guestbook example), scheduler gets stuck and never recovers. 
This is a race, I have to run several times to see the error (A script to delete controller and pods, then create new controller, leave enough time in between for k8s to react). I've created PR for an attempted fix.\nI saw this too while following the guest book and creating the php/frontend step. In my case, I had 2 minions while the replication controller configuration had configured 3 replicas, so 2 pods got scheduled -each on another minion, the third pod had empty Host (saw that when doing 'list pods'), and the 3 of them were stuck in \"waiting\" status. I deleted the controller and the pods, fixed the configuration in json file to be 2 replicas and recreated the controller and everything worked. Does your fix take care of this use case? (too many replicas, too few minions/one of the pods is not scheduled)\nThis is different from your use case. If you have more replicas than minions, then the scheduler will get stuck for sure (with the error msg probably being 'failed to find a fit for pod'). AFAICT, it's not support yet.\nIf you remove the HostPorts, it should work for you. thanks for report.\n, thanks for your replies. I understand why it isn't working, but I'd expect one of the 2 options to happen instead of the current situation: the creation of replication controller if there are less minions than replicas, AND there is a hostPort property. In other words - validation of the content during creation. least schedule all replicas that can get a minion (in my case 2 out of 3) and leave only one at \"waiting\". Is there an issue open about the use case I describe?\nshould be true today. is less possible than it seems like-- minions can come and go, and become over or under loaded in the time between creation and when the pod shows up at the scheduler. So even if we check up front, we still have to check again later, and sometimes the first check will pass but not the second.", "positive_passages": [{"docid": "doc-en-kubernetes-dcf0382711b8744399fbfdf088830137ab3c091eba6b4a21deceaa2a9aed1bc9", "text": "expectedList: []HostPriority{{\"machine1\", 37 /* int(75% / 2) */}, {\"machine2\", 62 /* int( 75% + 50% / 2) */}}, test: \"no resources requested\", pods: []api.Pod{ {DesiredState: cpuOnly, CurrentState: machine1State}, {DesiredState: cpuAndMemory, CurrentState: machine2State}, {DesiredState: cpuOnly}, {DesiredState: cpuAndMemory}, }, }, {", "commid": "kubernetes_pr_1752"}], "negative_passages": []} {"query_id": "q-en-kubernetes-a987b5e3b3353d21d7a8120b4ee9be1106a65ade9430df500d1a08c675d1ca1a", "query": "When creating replication controller (e.g. frontend php-redis from guestbook example), scheduler gets stuck and never recovers. This is a race, I have to run several times to see the error (A script to delete controller and pods, then create new controller, leave enough time in between for k8s to react). I've created PR for an attempted fix.\nI saw this too while following the guest book and creating the php/frontend step. In my case, I had 2 minions while the replication controller configuration had configured 3 replicas, so 2 pods got scheduled -each on another minion, the third pod had empty Host (saw that when doing 'list pods'), and the 3 of them were stuck in \"waiting\" status. I deleted the controller and the pods, fixed the configuration in json file to be 2 replicas and recreated the controller and everything worked. Does your fix take care of this use case? (too many replicas, too few minions/one of the pods is not scheduled)\nThis is different from your use case. 
If you have more replicas than minions, then the scheduler will get stuck for sure (with the error msg probably being 'failed to find a fit for pod'). AFAICT, it's not support yet.\nIf you remove the HostPorts, it should work for you. thanks for report.\n, thanks for your replies. I understand why it isn't working, but I'd expect one of the 2 options to happen instead of the current situation: the creation of replication controller if there are less minions than replicas, AND there is a hostPort property. In other words - validation of the content during creation. least schedule all replicas that can get a minion (in my case 2 out of 3) and leave only one at \"waiting\". Is there an issue open about the use case I describe?\nshould be true today. is less possible than it seems like-- minions can come and go, and become over or under loaded in the time between creation and when the pod shows up at the scheduler. So even if we check up front, we still have to check again later, and sometimes the first check will pass but not the second.", "positive_passages": [{"docid": "doc-en-kubernetes-a4a702dd6323660ceb6ec0175d4b7ac1325e2ce32c22c8373aee95551a31c4ab", "text": "expectedList: []HostPriority{{\"machine1\", 0}, {\"machine2\", 0}}, test: \"zero minion resources\", pods: []api.Pod{ {DesiredState: cpuOnly, CurrentState: machine1State}, {DesiredState: cpuAndMemory, CurrentState: machine2State}, {DesiredState: cpuOnly}, {DesiredState: cpuAndMemory}, }, }, }", "commid": "kubernetes_pr_1752"}], "negative_passages": []} {"query_id": "q-en-kubernetes-4bb1462c8ab8a913aa37a2b81374b4383daa9203c50249aab71f7ef3e87e35d5", "query": "the file and line for these log statements is in the That's not helpful I1031 11:09:06. ] 'foo' has no storage object I1031 11:09:06. ] 'pods' is not a redirector I'd rather see Maybe use runtime.Caller in httplog to somehow fake the log to come from the caller's file/line.\nYeah-- actually this should be a feature of the glog package. Would be awesome to do:\nFollowup: glog is an opensource copy of a google-internal library; there's already something enabling this internally; we'll try to get a new version pushed externally in the next week or so. Then we can fix this.\nAwesome. On Oct 31, 2014 2:53 PM, \"Daniel Smith\" wrote:\nshould meet your requirements.\n+1, I would love to have this.", "positive_passages": [{"docid": "doc-en-kubernetes-7698bb34ffd9ec8a6d11e5fd2cee506f6fa6228ef67d6aaf07fe94d8c787c421", "text": "// Addf logs info immediately. func (passthroughLogger) Addf(format string, data ...interface{}) { glog.Infof(format, data...) glog.InfoDepth(1, fmt.Sprintf(format, data...)) } // DefaultStacktracePred is the default implementation of StacktracePred.", "commid": "kubernetes_pr_2917"}], "negative_passages": []} {"query_id": "q-en-kubernetes-4bb1462c8ab8a913aa37a2b81374b4383daa9203c50249aab71f7ef3e87e35d5", "query": "the file and line for these log statements is in the That's not helpful I1031 11:09:06. ] 'foo' has no storage object I1031 11:09:06. ] 'pods' is not a redirector I'd rather see Maybe use runtime.Caller in httplog to somehow fake the log to come from the caller's file/line.\nYeah-- actually this should be a feature of the glog package. Would be awesome to do:\nFollowup: glog is an opensource copy of a google-internal library; there's already something enabling this internally; we'll try to get a new version pushed externally in the next week or so. Then we can fix this.\nAwesome. 
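The scheduler fix threaded through the records above keys its pod-per-machine map on DesiredState.Host rather than CurrentState.Host, so pods that have been scheduled but are not yet reported running still count against their machine. A trimmed sketch of that grouping (Pod is reduced to the fields the example needs):

```go
package main

import "fmt"

// Pod is a trimmed stand-in for api.Pod: only the fields the grouping needs.
type Pod struct {
	Name         string
	DesiredState struct{ Host string }
	CurrentState struct{ Host string }
}

// mapPodsToMachines groups pods by the host the scheduler assigned them to.
// Keying on DesiredState.Host (as the quoted fix does) counts freshly
// scheduled pods; keying on CurrentState.Host would miss them and let the
// scheduler pile new pods onto the same machine.
func mapPodsToMachines(pods []Pod) map[string][]Pod {
	machineToPods := map[string][]Pod{}
	for _, p := range pods {
		host := p.DesiredState.Host
		machineToPods[host] = append(machineToPods[host], p)
	}
	return machineToPods
}

func main() {
	var a, b Pod
	a.Name, b.Name = "frontend-1", "frontend-2"
	a.DesiredState.Host = "machine1"
	b.DesiredState.Host = "machine2" // scheduled, CurrentState.Host still empty
	for host, pods := range mapPodsToMachines([]Pod{a, b}) {
		fmt.Println(host, len(pods))
	}
}
```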
On Oct 31, 2014 2:53 PM, \"Daniel Smith\" wrote:\nshould meet your requirements.\n+1, I would love to have this.", "positive_passages": [{"docid": "doc-en-kubernetes-ae3ad765b06428df0b1a5022cd02406f89d573d141b4922d9688e14d7bd0c4d2", "text": "// Log is intended to be called once at the end of your request handler, via defer func (rl *respLogger) Log() { latency := time.Since(rl.startTime) glog.V(2).Infof(\"%s %s: (%v) %v%v%v\", rl.req.Method, rl.req.RequestURI, latency, rl.status, rl.statusStack, rl.addedInfo) if glog.V(2) { glog.InfoDepth(1, fmt.Sprintf(\"%s %s: (%v) %v%v%v\", rl.req.Method, rl.req.RequestURI, latency, rl.status, rl.statusStack, rl.addedInfo)) } } // Header implements http.ResponseWriter.", "commid": "kubernetes_pr_2917"}], "negative_passages": []} {"query_id": "q-en-kubernetes-cbed3b419c4c061a7654b2a666cebe928bfe0af38a566faf794d9f808fca93d4", "query": "I suspect this is dues to the recent change to container VMs as the base image? ifconfig docker0 - exists but should not. iptables-save | grep docker - 4 docker rules that should not exist.\nIs this still relevant?\nwas still a containervm turd the last time I checked. It's just not causing any harm.\nYeah, no harm, just ugly. , Zach Loafman wrote:\nI think we should remove the docker0 from minion nodes, but do we still need this for master node? cc/\nThis is required by running etcd server as pod with flannel on gce nodes. Please sync your cluster with this fix.", "positive_passages": [{"docid": "doc-en-kubernetes-61d2511b799716e3fd58db162df560893be639b45fe46cdd70cc01a9fe45639f", "text": "echo \" echo 'Waiting for metadata MINION_IP_RANGE...'\" echo \" sleep 3\" echo \"done\" echo \"\" echo \"# Remove docker artifacts on minion nodes\" echo \"iptables -t nat -F\" echo \"ifconfig docker0 down\" echo \"brctl delbr docker0\" echo \"\" echo \"EXTRA_DOCKER_OPTS='${EXTRA_DOCKER_OPTS}'\" echo \"ENABLE_DOCKER_REGISTRY_CACHE='${ENABLE_DOCKER_REGISTRY_CACHE:-false}'\" grep -v \"^#\" \"${KUBE_ROOT}/cluster/gce/templates/common.sh\"", "commid": "kubernetes_pr_4976"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3a47714aea9d63183fb02abaef06e69a4e2b29d7c7283e834a03eb83ec0694c5", "query": "It looks like we always try and hit the metadata server even if we aren't on GCE. We then print an error out. This should either be silent (assume after X attempts that we aren't on GCE?) or should only be activated with an option. Email thread: !topic/google-containers/w-ei5Xs6K0Y\ncc:\nFYI, the way the credentialprovider logic is set up, providers are asked once per keyring creation whether they are enabled. If they are enabled, then they are asked to provide their contribution to .dockercfg each time the lazy keyring is accessed. For efficiency, there is a caching provider, which composes with heavier providers and stores their .dockercfg contribution for a predetermined TTL. What was happening here was that \"Enabled()\" for our GCE-metadata implementations was logging about errors fetching \"http://metadata\", where the entire \"Enabled()\" check is \"err == nil\".\nI believe that this was fixed by", "positive_passages": [{"docid": "doc-en-kubernetes-b3bc097dba99bc444f17af7a6f314791a179e75e7dda6e34b492903e2787f6bd", "text": "return readDockerConfigFileFromBytes(contents) } // HttpError wraps a non-StatusOK error code as an error. 
type HttpError struct { StatusCode int Url string } // Error implements error func (he *HttpError) Error() string { return fmt.Sprintf(\"http status code: %d while fetching url %s\", he.StatusCode, he.Url) } func ReadUrl(url string, client *http.Client, header *http.Header) (body []byte, err error) { req, err := http.NewRequest(\"GET\", url, nil) if err != nil { glog.Errorf(\"while creating request to read %s: %v\", url, err) return nil, err } if header != nil {", "commid": "kubernetes_pr_2674"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3a47714aea9d63183fb02abaef06e69a4e2b29d7c7283e834a03eb83ec0694c5", "query": "It looks like we always try and hit the metadata server even if we aren't on GCE. We then print an error out. This should either be silent (assume after X attempts that we aren't on GCE?) or should only be activated with an option. Email thread: !topic/google-containers/w-ei5Xs6K0Y\ncc:\nFYI, the way the credentialprovider logic is set up, providers are asked once per keyring creation whether they are enabled. If they are enabled, then they are asked to provide their contribution to .dockercfg each time the lazy keyring is accessed. For efficiency, there is a caching provider, which composes with heavier providers and stores their .dockercfg contribution for a predetermined TTL. What was happening here was that \"Enabled()\" for our GCE-metadata implementations was logging about errors fetching \"http://metadata\", where the entire \"Enabled()\" check is \"err == nil\".\nI believe that this was fixed by", "positive_passages": [{"docid": "doc-en-kubernetes-6b5c235cd01619ad8e36ace30dd56047eada04de549b796b673e294b4294c902", "text": "} resp, err := client.Do(req) if err != nil { glog.Errorf(\"while trying to read %s: %v\", url, err) return nil, err } defer resp.Body.Close() if resp.StatusCode != http.StatusOK { err := fmt.Errorf(\"http status code: %d while fetching url %s\", resp.StatusCode, url) glog.Errorf(\"while trying to read %s: %v\", url, err) glog.V(2).Infof(\"body of failing http response: %v\", resp.Body) return nil, err return nil, &HttpError{ StatusCode: resp.StatusCode, Url: url, } } contents, err := ioutil.ReadAll(resp.Body) if err != nil { glog.Errorf(\"while trying to read %s: %v\", url, err) return nil, err }", "commid": "kubernetes_pr_2674"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3a47714aea9d63183fb02abaef06e69a4e2b29d7c7283e834a03eb83ec0694c5", "query": "It looks like we always try and hit the metadata server even if we aren't on GCE. We then print an error out. This should either be silent (assume after X attempts that we aren't on GCE?) or should only be activated with an option. Email thread: !topic/google-containers/w-ei5Xs6K0Y\ncc:\nFYI, the way the credentialprovider logic is set up, providers are asked once per keyring creation whether they are enabled. If they are enabled, then they are asked to provide their contribution to .dockercfg each time the lazy keyring is accessed. For efficiency, there is a caching provider, which composes with heavier providers and stores their .dockercfg contribution for a predetermined TTL. 
What was happening here was that \"Enabled()\" for our GCE-metadata implementations was logging about errors fetching \"http://metadata\", where the entire \"Enabled()\" check is \"err == nil\".\nI believe that this was fixed by", "positive_passages": [{"docid": "doc-en-kubernetes-1ce57414992a7ad2b8a61e9d6358e24d096c58632cbf3d94233c449cf6b79991", "text": "func (g *dockerConfigKeyProvider) Provide() credentialprovider.DockerConfig { // Read the contents of the google-dockercfg metadata key and // parse them as an alternate .dockercfg if cfg, err := credentialprovider.ReadDockerConfigFileFromUrl(dockerConfigKey, g.Client, metadataHeader); err == nil { if cfg, err := credentialprovider.ReadDockerConfigFileFromUrl(dockerConfigKey, g.Client, metadataHeader); err != nil { glog.Errorf(\"while reading 'google-dockercfg' metadata: %v\", err) } else { return cfg }", "commid": "kubernetes_pr_2674"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3a47714aea9d63183fb02abaef06e69a4e2b29d7c7283e834a03eb83ec0694c5", "query": "It looks like we always try and hit the metadata server even if we aren't on GCE. We then print an error out. This should either be silent (assume after X attempts that we aren't on GCE?) or should only be activated with an option. Email thread: !topic/google-containers/w-ei5Xs6K0Y\ncc:\nFYI, the way the credentialprovider logic is set up, providers are asked once per keyring creation whether they are enabled. If they are enabled, then they are asked to provide their contribution to .dockercfg each time the lazy keyring is accessed. For efficiency, there is a caching provider, which composes with heavier providers and stores their .dockercfg contribution for a predetermined TTL. What was happening here was that \"Enabled()\" for our GCE-metadata implementations was logging about errors fetching \"http://metadata\", where the entire \"Enabled()\" check is \"err == nil\".\nI believe that this was fixed by", "positive_passages": [{"docid": "doc-en-kubernetes-c49221226005afebb55e5b8c82f053a20a3e44eeaaba4e089fc1c0ea9e33bb0b", "text": "// Provide implements DockerConfigProvider func (g *dockerConfigUrlKeyProvider) Provide() credentialprovider.DockerConfig { // Read the contents of the google-dockercfg-url key and load a .dockercfg from there if url, err := credentialprovider.ReadUrl(dockerConfigUrlKey, g.Client, metadataHeader); err == nil { if url, err := credentialprovider.ReadUrl(dockerConfigUrlKey, g.Client, metadataHeader); err != nil { glog.Errorf(\"while reading 'google-dockercfg-url' metadata: %v\", err) } else { if strings.HasPrefix(string(url), \"http\") { if cfg, err := credentialprovider.ReadDockerConfigFileFromUrl(string(url), g.Client, nil); err == nil { if cfg, err := credentialprovider.ReadDockerConfigFileFromUrl(string(url), g.Client, nil); err != nil { glog.Errorf(\"while reading 'google-dockercfg-url'-specified url: %s, %v\", string(url), err) } else { return cfg } } else {", "commid": "kubernetes_pr_2674"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3a47714aea9d63183fb02abaef06e69a4e2b29d7c7283e834a03eb83ec0694c5", "query": "It looks like we always try and hit the metadata server even if we aren't on GCE. We then print an error out. This should either be silent (assume after X attempts that we aren't on GCE?) or should only be activated with an option. Email thread: !topic/google-containers/w-ei5Xs6K0Y\ncc:\nFYI, the way the credentialprovider logic is set up, providers are asked once per keyring creation whether they are enabled. 
If they are enabled, then they are asked to provide their contribution to .dockercfg each time the lazy keyring is accessed. For efficiency, there is a caching provider, which composes with heavier providers and stores their .dockercfg contribution for a predetermined TTL. What was happening here was that \"Enabled()\" for our GCE-metadata implementations was logging about errors fetching \"http://metadata\", where the entire \"Enabled()\" check is \"err == nil\".\nI believe that this was fixed by", "positive_passages": [{"docid": "doc-en-kubernetes-654cfee1af2701040a81e275a84f9457c9a5b22b4da527179ad1d1646b217f59", "text": "tokenJsonBlob, err := credentialprovider.ReadUrl(metadataToken, g.Client, metadataHeader) if err != nil { glog.Errorf(\"while reading access token endpoint: %v\", err) return cfg } email, err := credentialprovider.ReadUrl(metadataEmail, g.Client, metadataHeader) if err != nil { glog.Errorf(\"while reading email endpoint: %v\", err) return cfg }", "commid": "kubernetes_pr_2674"}], "negative_passages": []} {"query_id": "q-en-kubernetes-142e8decd8d12f564814b0a08026ec0c2500ddcd8040f0182edc6a7f51b222f8", "query": "0/8 is with a company's intranet, so it can be insecure. This is similiar to \"For GCE, allow insecure registries anywhere in 10.0.0.0/8\" Thanks!\nLinking to and .\nThis was", "positive_passages": [{"docid": "doc-en-kubernetes-5bdb878a76504e2bf239961639672bc380bb42b9bd2322c9afb7439331c98d3f", "text": "# Optional: Enable node logging. ENABLE_NODE_LOGGING=true LOGGING_DESTINATION=elasticsearch # options: elasticsearch, gcp # Don't require https for registries in our local RFC1918 network EXTRA_DOCKER_OPTS=\"--insecure-registry 10.0.0.0/8\" ", "commid": "kubernetes_pr_2620"}], "negative_passages": []} {"query_id": "q-en-kubernetes-142e8decd8d12f564814b0a08026ec0c2500ddcd8040f0182edc6a7f51b222f8", "query": "0/8 is with a company's intranet, so it can be insecure. This is similiar to \"For GCE, allow insecure registries anywhere in 10.0.0.0/8\" Thanks!\nLinking to and .\nThis was", "positive_passages": [{"docid": "doc-en-kubernetes-9d730cd43a86d1d194c08a666b7c4ea237baff9e736083247c4b512d4bfaccff", "text": "LOGGING_DESTINATION=elasticsearch # options: elasticsearch, gcp ENABLE_CLUSTER_MONITORING=false # Don't require https for registries in our local RFC1918 network EXTRA_DOCKER_OPTS=\"--insecure-registry 10.0.0.0/8\" ", "commid": "kubernetes_pr_2620"}], "negative_passages": []} {"query_id": "q-en-kubernetes-142e8decd8d12f564814b0a08026ec0c2500ddcd8040f0182edc6a7f51b222f8", "query": "0/8 is with a company's intranet, so it can be insecure. 
This is similiar to \"For GCE, allow insecure registries anywhere in 10.0.0.0/8\" Thanks!\nLinking to and .\nThis was", "positive_passages": [{"docid": "doc-en-kubernetes-e2b8f67c6c6914e7094a331cffc57628b1c018982fe24aca865cb2743d3b3c14", "text": "cloud: gce EOF DOCKER_OPTS=\"\" if [[ -n \"${EXTRA_DOCKER_OPTS-}\" ]]; then DOCKER_OPTS=\"${EXTRA_DOCKER_OPTS}\" fi # Decide if enable the cache if [[ \"${ENABLE_DOCKER_REGISTRY_CACHE}\" == \"true\" ]]; then if [[ \"${ENABLE_DOCKER_REGISTRY_CACHE}\" == \"true\" ]]; then REGION=$(echo \"${ZONE}\" | cut -f 1,2 -d -) echo \"Enable docker registry cache at region: \" $REGION DOCKER_OPTS=\"--registry-mirror=\"https://${REGION}.docker-cache.clustermaster.net\"\" DOCKER_OPTS=\"${DOCKER_OPTS} --registry-mirror='https://${REGION}.docker-cache.clustermaster.net'\" fi cat <>/etc/salt/minion.d/grains.conf if [[ -n \"{DOCKER_OPTS}\" ]]; then cat <>/etc/salt/minion.d/grains.conf docker_opts: $DOCKER_OPTS EOF fi", "commid": "kubernetes_pr_2620"}], "negative_passages": []} {"query_id": "q-en-kubernetes-142e8decd8d12f564814b0a08026ec0c2500ddcd8040f0182edc6a7f51b222f8", "query": "0/8 is with a company's intranet, so it can be insecure. This is similiar to \"For GCE, allow insecure registries anywhere in 10.0.0.0/8\" Thanks!\nLinking to and .\nThis was", "positive_passages": [{"docid": "doc-en-kubernetes-5d98b5dccb9509ca8d0bdfb67b6e15c16ed4a7753c80e3325ec0e19fe46263d7", "text": "echo \"ZONE='${ZONE}'\" echo \"MASTER_NAME='${MASTER_NAME}'\" echo \"MINION_IP_RANGE='${MINION_IP_RANGES[$i]}'\" echo \"EXTRA_DOCKER_OPTS='${EXTRA_DOCKER_OPTS}'\" echo \"ENABLE_DOCKER_REGISTRY_CACHE='${ENABLE_DOCKER_REGISTRY_CACHE:-false}'\" grep -v \"^#\" \"${KUBE_ROOT}/cluster/gce/templates/common.sh\" grep -v \"^#\" \"${KUBE_ROOT}/cluster/gce/templates/salt-minion.sh\"", "commid": "kubernetes_pr_2620"}], "negative_passages": []} {"query_id": "q-en-kubernetes-142e8decd8d12f564814b0a08026ec0c2500ddcd8040f0182edc6a7f51b222f8", "query": "0/8 is with a company's intranet, so it can be insecure. This is similiar to \"For GCE, allow insecure registries anywhere in 10.0.0.0/8\" Thanks!\nLinking to and .\nThis was", "positive_passages": [{"docid": "doc-en-kubernetes-fe8f820400f1edc5d05f4f6403cece3f14d841eec9396013f16753e8c8aa16bf", "text": " DOCKER_OPTS=\"\" {% if grains.docker_opts is defined %} {% set docker_opts = grains.docker_opts %} {% else %} {% set docker_opts = \"\" %} DOCKER_OPTS=\"${DOCKER_OPTS} {{grains.docker_opts}}\" {% endif %} DOCKER_OPTS=\"{{docker_opts}} --bridge cbr0 --iptables=false --ip-masq=false -r=false\" DOCKER_OPTS=\"${DOCKER_OPTS} --bridge cbr0 --iptables=false --ip-masq=false -r=false\" ", "commid": "kubernetes_pr_2620"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c802a37aad85dfdce2d6232f30f1e039273d58e19f8aa63b8acb725eeb2b6040", "query": "it's impossible to GET replication controllers on v3. on v1 on the same server it works fine. example request: GET on http://localhost:8080/api/v1beta3/replicationControllers returns 404 http://localhost:8080/api/v1beta1/replicationControllers returns 200\nv1beta3 moved to all lower case URL Would give that a try first Sent from my iPhone\nlooks like it's working. but it breaks backwards for clients who worked with v1. For instance, in my gem I can't now smoothly work with both versions because I need to treat the name of the resource differently. why don't at least redirect those who call replicationControllers to the lowercase too? Also, there's no way to know which resources are available and their names. 
A lot of REST api on their most top level at least publish the links to main resources, but if I call: http://localhost:8080/api/v1beta3/ it returns 404\nBecause we are actively trying to drop support for mixed case prior to 1.0, I did not continue supporting the old url paths. If you look at the go client, we have a \"legacy\" flag we use to control this and a few other behaviors.\nwell yes, but breaking backwards creates more work for the users... more \"if this version ...\" kind of stuff.\nWe got v1bet1/2 wrong and don't want to carry it forever. Stopping it early before v1 is our priority.\nok :( are the REST apis going through any kind of design review or code review?\nv1beta1 is really an alpha quality API. It grew organically and did not go through any kind of design review, as evidenced by the many issues you've filed. We want to get rid of it ASAP. We intend v1beta3 to be the \"release candidate\" for the v1 API, which will be the stable API. As for knowing what exists, you can now browse or GET . Additionally, pull is in progress, and will return a list of valid paths to GET .\nthanks. will check swagger for now", "positive_passages": [{"docid": "doc-en-kubernetes-fe567d9cf78409720bcbb87b1ad6339cb18b53dd88041da1d575419b6cc2df05", "text": " # The Kubernetes API Primary system and API concepts are documented in the [User guide](user-guide.md). Overall API conventions are described in the [API conventions doc](api-conventions.md). Complete API details are documented via [Swagger](http://swagger.io/). The Kubernetes apiserver (aka \"master\") exports an API that can be used to retrieve the [Swagger spec](https://github.com/swagger-api/swagger-spec/tree/master/schemas/v1.2) for the Kubernetes API, by default at `/swaggerapi`, and a UI you can use to browse the API documentation at `/swaggerui`. We also periodically update a [statically generated UI](http://kubernetes.io/third_party/swagger-ui/). Remote access to the API is discussed in the [access doc](accessing_the_api.md). The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. The [Kubectl](kubectl.md) command-line tool can be used to create, update, delete, and get API objects. Kubernetes also stores its serialized state (currently in [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)) in terms of the API resources. Kubernetes itself is decomposed into multiple components, which interact through its API. ## API changes In our experience, any system that is successful needs to grow and change as new use cases emerge or existing ones change. Therefore, we expect the Kubernetes API to continuously change and grow. However, we intend to not break compatibility with existing clients, for an extended period of time. In general, new API resources and new resource fields can be expected to be added frequently. Elimination of resources or fields will require following a deprecation process. The precise deprecation policy for eliminating features is TBD, but once we reach our 1.0 milestone, there will be a specific policy. What constitutes a compatible change and how to change the API are detailed by the [API change document](devel/api_changes.md). ## API versioning Fine-grain resource evolution alone makes it difficult to eliminate fields or restructure resource representations. Therefore, Kubernetes supports multiple API versions, each at a different API path prefix, such as `/api/v1beta3`. 
These are simply different interfaces to read and/or modify the same underlying resources. In general, all API resources are accessible via all API versions, though there may be some cases in the future where that is not true. Distinct API versions present more clear, consistent views of system resources and behavior than intermingled, independently evolved resources. They also provide a more straightforward mechanism for controlling access to end-of-lifed and/or experimental APIs. The [API and release versioning proposal](versioning.md) describes the current thinking on the API version evolution process. ## v1beta1 and v1beta2 are deprecated; please move to v1beta3 ASAP As of April 1, 2015, the Kubernetes v1beta3 API has been enabled by default, and the v1beta1 and v1beta2 APIs are deprecated. v1beta3 should be considered the v1 release-candidate API, and the v1 API is expected to be substantially similar. As \"pre-release\" APIs, v1beta1, v1beta2, and v1beta3 will be eliminated once the v1 API is available, by the end of June 2015. ## v1beta3 conversion tips We're working to convert all documentation and examples to v1beta3. Most examples already contain a v1beta3 subdirectory with the API objects translated to v1beta3. A simple [API conversion tool](cluster_management.md#switching-your-config-files-to-a-new-api-version) has been written to simplify the translation process. Use `kubectl create --validate` in order to validate your json or yaml against our Swagger spec. Some important differences between v1beta1/2 and v1beta3: * The resource `id` is now called `name`. * `name`, `labels`, `annotations`, and other metadata are now nested in a map called `metadata` * `desiredState` is now called `spec`, and `currentState` is now called `status` * `/minions` has been moved to `/nodes`, and the resource has kind `Node` * The namespace is required (for all namespaced resources) and has moved from a URL parameter to the path: `/api/v1beta3/namespaces/{namespace}/{resource_collection}/{resource_name}` * The names of all resource collections are now lower cased - instead of `replicationControllers`, use `replicationcontrollers`. * To watch for changes to a resource, open an HTTP or Websocket connection to the collection URL and provide the `?watch=true` URL parameter along with the desired `resourceVersion` parameter to watch from. * The container `entrypoint` has been renamed to `command`, and `command` has been renamed to `args`. * Container, volume, and node resources are expressed as nested maps (e.g., `resources{cpu:1}`) rather than as individual fields, and resource values support [scaling suffixes](resources.md#resource-quantities) rather than fixed scales (e.g., milli-cores). * Restart policy is represented simply as a string (e.g., \"Always\") rather than as a nested map (\"always{}\"). * The volume `source` is inlined into `volume` rather than nested. ", "commid": "kubernetes_pr_6391"}], "negative_passages": []} {"query_id": "q-en-kubernetes-988dd0ceccf3dcde8b8ea1670f138b61545b1a8185bf62f096fafbe67e654232", "query": "For example, kubernetes-ro service appears with empty selector (this service was created by the hack/local cluster) - is that ok? What meaning such service has if it has no entity to fwd traffic to?\nkubernetes and kubernetes-ro are currently \"virtual\" - they are not backed by running pods, but by the master itself. , abonas wrote:\nif that's internal impl, why is it exposed to user? let's say a client pulls all entities and builds a UI around it. 
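The conversion tips listed in the passage above are easier to follow side by side. The following is a minimal illustrative sketch, not taken from the PR itself; the pod name and image (`nginx`) and the port are assumed. It shows the same pod against both API versions, with `id` and `labels` moving under `metadata` and `desiredState.manifest` collapsing into `spec`:

```json
{"id": "nginx", "kind": "Pod", "apiVersion": "v1beta1",
 "labels": {"name": "nginx"},
 "desiredState": {"manifest": {"version": "v1beta1", "id": "nginx",
   "containers": [{"name": "nginx", "image": "nginx", "ports": [{"containerPort": 80}]}]}}}
```

```json
{"kind": "Pod", "apiVersion": "v1beta3",
 "metadata": {"name": "nginx", "labels": {"name": "nginx"}},
 "spec": {"containers": [{"name": "nginx", "image": "nginx", "ports": [{"containerPort": 80}]}]}}
```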
then a service with no selector arrives, and it's internal to kube - how the client should know not to present it/touch it? (similar to the \"net\" container in pods)\nIt's exposed exactly FOR users - so they can find the apiserver. It has a corresponding Endpoints object, it gets a portal IP and a kube-proxy rule, and it gets service env vars and a DNS name. The fact that it is virtual is an implementation detail. That it exists is all users should need to know. , abonas wrote:\nAnd FWIW, the master components may one day actually be pods... , Tim Hockin wrote:\nthanks for the explanation. so any general rule how to distinguish kube pods/services from user defined?\nDo we need to distinguish them? We could add labels, I guess, if it is really important to distinguish them.\nsometimes users would want to see only the things they defined and not implementation details of the system.\nOne could argue this is why the users should be using labels. I'm not against defining labels for \"special\" stuff, but it's new territory, so we should enter with caution. , abonas wrote:\nI think the right answer might be to put them in a special \"cluster\" or \"system\" namespace. , Tim Hockin wrote:\nSee If we put them in a \"kubernetes\" ns, the DNS name would be :) I'm not against segmenting them into a different namespace. We just need to decide on what that namespace is called, plumb through that configuration, decide on compatibility for a short while or not, and decide if that namespace is \"special\" or just \"different\" - i.e. should the skydns, heapster, etc pods go into this namespace? , Brendan Burns wrote:\nAs said, the services are there for applications that need to contact the apiserver. Additionally, nil selectors can be used by users to import external services. Adding labels to the kubernetes services won't help. Users need to manage their objects using labels. /cc\nYeah, lots of consumers will expose \"special\" services. IN fact, in many clusters the \"special\" services will outweigh the unspecial ones. At this point, the namespace they are in is completely arbitrary, and I think because it is a name, it should be free for the end user to choose where they are surfaced.\nPer the proposal here: , I think the namespace should continue to hold these services, and they require no special labels. I sense comment is made without thinking through what Kubernetes operators do to bootstrap a cluster. I think it's reasonable that namespace holds everything you need to run, and operators may or may not choose to segment their data in other namespaces. If they want to not see this information, its an exact reason for why they should use their own namespace. I don't think we need anything new here.\n- I'm not clear on your meaning. Did you mean that we should make the \"cluster admin\" namespace a config param that everyone knows about and can be different cluster-to-cluster? Or did you mean that default is as arbitrary a name as any other? I'd be fine moving things out of default into \"kube\" or \"kubernetes\" or \"cluster\" or something, but I am less convinced that a flag is a good thing. , Clayton Coleman wrote:\n- wherever we stick this stuff, it should be the namespace that is initially bootstrapped, which today is . I continue to believe that is as good a name as any other. I do not see a need to do more here.\nI think as soon as you bring naming into the picture, making decisions based on names gets very tricky (what if I want my logging service to be everywhere?). 
If there is a magic namespace that exposes things to the rest of the cluster, we should ask whether there's only one magic namespace, or multiple? Are services supposed to cross namespaces? It's great that pods can find the master... but that's not the only thing they'll need to find in each namespace. And baking in special names (or customizable names) into your app code is fragile, because then your code only works inside Kubernetes. I'm just trying to pump the brakes on special services prior to us designing them... :) ----- Original Message -----\noh, to be clear, I don't want \"special\" services - I am merely asking if moving non-user-defined objects (where that is defined very loosely) out of \"default\" is worthwhile. I tend to agree with Derek that it's not worth energy. , Clayton Coleman wrote:\nagree with you that lots of consumers will expose \"special\" services We should not use the (namespace + k8s-service + skydns) mechanism for publishing the location of those special services. We should decouple the choice of namespace to run the pods of the service in from the well-known dns address of special services. Otherwise, it will be hard to move the pods of the service to another namespace, which people will definitely want to do as they grow. Instead, we need a place to publish well-known services which is not tied to the namespace of the pods of that service.\nyou open a whole new topic. I've noodled with the idea of a Name resource type. We would generate a Name for each Service as \"service.namespace\", but users could also add their own Name resources. , Eric Tune wrote:\nI'd like to understand better your comment regarding nil selectors used to export external services - can you elaborate? give an example? my understanding is that service redirects traffic to pods based on labels. with nil selector service does nothing, so why is it a service in the first place? I'm pro decoupling special services/internal implementation services from user defined entities. whether by separate namespace, or by saying that the default namespace is reserved, or by any other solution. because otherwise internal implementation and user defined content are mixed and that's no good for a proper management. just tagging you so you could see this issue and the related discussions.\nServices abstract backends - they don't have to abstract pods. Some examples: You want to have an external database cluster in production, but in test you use your own databases. You would have a service for both and use the environment variables, but the production service has a nil selector with its endpoints set to the database cluster. You want to have an empty service because you're reserving a name (that isn't accessible yet) but will be in the future You want to point your service to a service in another namespace or on another cluster. You set the endpoints for your service to the publicIPs of the other service. ----- Original Message -----\n, thanks. regarding the last point - I was under impression (based on some other discussion) that there would not be cross namespace interaction? that entities can can access only entities in the same namespace?\nCross-namespace is currently allowed and I think has to be allowed, though perhaps (in the fullness of time) that can be policy based rather than wide open. , abonas wrote:\nplease see this correspondence. my understanding is that there is isolation\nResponded there. I think there's some less-than-crystal-clear words in play. 
, abonas wrote:\n+1 to Eric's comment about ensuring there is a consistent manner for system services like kubernetes-ro, if that's not fixes for all clusters it's going to be a nightmare to write reusable containers for multiple different clusters. I don't care if we do that by using a constant namespace, or by giving system services two different names, but there needs to be at least one name that is constant. Regarding default vs. cluster namespace, I think the issue is one of clutter as a user is starting to understand k8s, you turn up a new cluster, launch a pod, or service, and it's hard to find your one object on the sea of system created pods and services. It's visual clutter that confuses people as they onboard. On Jan 23, 2015 3:01 PM, \"Tim Hockin\" wrote:\n- can you drop me a line (my github name at google) - I want to ask you something, and you have no contact info :) , abonas wrote:\nsorry for naive question but why would I care as an user that wants to run its existing workload on Kubernetes about kubernetes-ro ? If I write an application that uses kubernetes-ro then I'm locked into Kubernetes. I understand the need for kubernetes-ro being accessible by cluster level services such as monitoring, logging. I also can imagine that a power user might want to start its own cluster services that use kubernetes-ro. I don't see clearly why would majority of users care about kubernetes-ro being visible from their container. I see a reason for the cluster services being visible and accessible for Kubernetes developer, but from an user perspective such services only make the system more complex: why do I see these services (pods,rc) ? are they related to my application ? should I care about them (especially when I use hosted Kubernetes) ? why does my container get all these environment variables for cluster services ?\nI will update file to explain why one would have service without a selector.\nRight now, every service goes everywhere for simplicity. Name spacing and filtering probably make sense in the long run, but it's not a high priority for v1.0 Brendan On Mar 4, 2015 5:02 AM, \"Rafa\u0142 Soko\u0142owski\" wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-a44f6cb5e869eccf248b8ac505e27244011ef25c2c80e1233583629299af7a4f", "text": "`service`, a client can simply connect to $MYAPP_SERVICE_HOST on port $MYAPP_SERVICE_PORT. ## Service without selector Services, in addition to providing clean abstraction to access pods, can also abstract any kind of backend: - you want to have an external database cluster in production, but in test you use your own databases. - you want to point your service to a service in another [`namespace`](namespaces.md) or on another cluster. - you are migrating your workload to Kubernetes and some of your backends run outside of Kubernetes. In any of these scenarios you can define a service without a selector: ```json \"kind\": \"Service\", \"apiVersion\": \"v1beta1\", \"id\": \"myapp\", \"port\": 8765 ``` then you can explicitly map the service to a specific endpoint(s): ```json \"kind\": \"Endpoints\", \"apiVersion\": \"v1beta1\", \"id\": \"myapp\", \"endpoints\": [\"173.194.112.206:80\"] ``` Access to the service without a selector works the same as if it had selector. The traffic will be routed to endpoints defined by the user (`173.194.112.206:80` in case of this example). ## How do they work? Each node in a Kubernetes cluster runs a `service proxy`. 
This application", "commid": "kubernetes_pr_5024"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e9e89a316cb39ee57596a3170474f837d1fc36ca6f45ccf3902663a7708546fe", "query": "As long as we have it in the API, it should maybe be renamed to gcePersistentDisk ?\nSGTM", "positive_passages": [{"docid": "doc-en-kubernetes-625074686ee19c90e3f328d639a9cbdc472bd6a9cc867c967232109cc7c209a0", "text": "EmptyDir *EmptyDir `json:\"emptyDir\"` // GCEPersistentDisk represents a GCE Disk resource that is attached to a // kubelet's host machine and then exposed to the pod. GCEPersistentDisk *GCEPersistentDisk `json:\"persistentDisk\"` GCEPersistentDisk *GCEPersistentDisk `json:\"gcePersistentDisk\"` // GitRepo represents a git repository at a particular revision. GitRepo *GitRepo `json:\"gitRepo\"` }", "commid": "kubernetes_pr_3900"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7289a91ca3371479da7fb74d0ceced629b874ff7b3b9f9b433395853883821d6", "query": "This whole set of tests (if it can't be run) should be omitted from the test execution, otherwise folks running on Mac can't verify everything else is clean in a sane fashion. Passive aggressive assign to Tim, I may get around to fixing this before he does.\nLast I checked Travis + Go didn't work. We should check again, and enable OS X builds in Travis if it's possible.\nYou could argue that if the unsupported mount () is being called in a test, there is a problem anyway - it has to run as root. Will investigate. On Jan 25, 2015 9:09 AM, \"Brendan Burns\" wrote:\nWithout a Mac to test on, I feel like this would be better fixed by someone who uses Mac... Probably: just rename to I just discovered \"implicit build constraints\", sigh. Go is crossing from user-friendly to sort of absurd, IMO.", "positive_passages": [{"docid": "doc-en-kubernetes-3dced0d2b2fe488bc7ddcbc632906d4f82fbd404f84d89e75d2a5d4b449d937e", "text": " // +build !windows /* Copyright 2014 Google Inc. All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package gce_pd import ( \"os\" \"syscall\" ) // Determine if a directory is a mountpoint, by comparing the device for the directory // with the device for it's parent. If they are the same, it's not a mountpoint, if they're // different, it is. func isMountPoint(file string) (bool, error) { stat, err := os.Stat(file) if err != nil { return false, err } rootStat, err := os.Lstat(file + \"/..\") if err != nil { return false, err } // If the directory has the same device as parent, then it's not a mountpoint. return stat.Sys().(*syscall.Stat_t).Dev != rootStat.Sys().(*syscall.Stat_t).Dev, nil } ", "commid": "kubernetes_pr_3827"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7289a91ca3371479da7fb74d0ceced629b874ff7b3b9f9b433395853883821d6", "query": "This whole set of tests (if it can't be run) should be omitted from the test execution, otherwise folks running on Mac can't verify everything else is clean in a sane fashion. 
Passive aggressive assign to Tim, I may get around to fixing this before he does.\nLast I checked Travis + Go didn't work. We should check again, and enable OS X builds in Travis if it's possible.\nYou could argue that if the unsupported mount () is being called in a test, there is a problem anyway - it has to run as root. Will investigate. On Jan 25, 2015 9:09 AM, \"Brendan Burns\" wrote:\nWithout a Mac to test on, I feel like this would be better fixed by someone who uses Mac... Probably: just rename to I just discovered \"implicit build constraints\", sigh. Go is crossing from user-friendly to sort of absurd, IMO.", "positive_passages": [{"docid": "doc-en-kubernetes-0c4488e8c76d6924ce0614a130538ccd156d3678806d22eb36812f8700034843", "text": " // +build !windows /* Copyright 2014 Google Inc. All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package gce_pd import ( \"os\" \"syscall\" ) // Determine if a directory is a mountpoint, by comparing the device for the directory // with the device for it's parent. If they are the same, it's not a mountpoint, if they're // different, it is. func isMountPoint(file string) (bool, error) { stat, err := os.Stat(file) if err != nil { return false, err } rootStat, err := os.Lstat(file + \"/..\") if err != nil { return false, err } // If the directory has the same device as parent, then it's not a mountpoint. return stat.Sys().(*syscall.Stat_t).Dev != rootStat.Sys().(*syscall.Stat_t).Dev, nil } ", "commid": "kubernetes_pr_3827"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7289a91ca3371479da7fb74d0ceced629b874ff7b3b9f9b433395853883821d6", "query": "This whole set of tests (if it can't be run) should be omitted from the test execution, otherwise folks running on Mac can't verify everything else is clean in a sane fashion. Passive aggressive assign to Tim, I may get around to fixing this before he does.\nLast I checked Travis + Go didn't work. We should check again, and enable OS X builds in Travis if it's possible.\nYou could argue that if the unsupported mount () is being called in a test, there is a problem anyway - it has to run as root. Will investigate. On Jan 25, 2015 9:09 AM, \"Brendan Burns\" wrote:\nWithout a Mac to test on, I feel like this would be better fixed by someone who uses Mac... Probably: just rename to I just discovered \"implicit build constraints\", sigh. Go is crossing from user-friendly to sort of absurd, IMO.", "positive_passages": [{"docid": "doc-en-kubernetes-c354a22747dabbc29c10835a65caaaf90444e20a88e58aeb5b00ffd068dcb814", "text": " // +build !linux // +build windows /* Copyright 2014 Google Inc. All rights reserved.", "commid": "kubernetes_pr_3827"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d0f38e058dd83a69064953b4bc0c2c3ffa6cae2ccfd3a071336c45d6b39f4940", "query": "wrote: I just ran across this the other day. but I am seeing the error in the service. Here is a sample of the output. I am running v0.8.2. 
I can get the behavior to go away temporarily by rebooting the node or restarting the service but it always seems to come back eventually.", "positive_passages": [{"docid": "doc-en-kubernetes-404ceebe195f6393af475b3e59bce772c69e27b5c00cbcacfe7a7e702c9a9a29", "text": "} if event.Type == watch.Error { util.HandleError(fmt.Errorf(\"error from watch during sync: %v\", errors.FromObject(event.Object))) // Clear the resource version, this may cause us to skip some elements on the watch, // but we'll catch them on the synchronize() call, so it works out. *resourceVersion = \"\" continue } glog.V(4).Infof(\"Got watch: %#v\", event) rc, ok := event.Object.(*api.ReplicationController) if !ok { if status, ok := event.Object.(*api.Status); ok { if status.Status == api.StatusFailure { glog.Errorf(\"failed to watch: %v\", status) // Clear resource version here, as above, this won't hurt consistency, but we // should consider introspecting more carefully here. (or make the apiserver smarter) // \"why not both?\" *resourceVersion = \"\" continue } } util.HandleError(fmt.Errorf(\"unexpected object: %#v\", event.Object)) continue }", "commid": "kubernetes_pr_4102"}], "negative_passages": []} {"query_id": "q-en-kubernetes-4be2baac657b91c8150c9eaeb075c4073519ae9c3e811b2f182b2a22b3be9af8", "query": "We found yesterday that cAdvisor wasn't being started properly in 0.9.0 and 0.9.1 due to an issue with manifest file format changes (, fixed in ). If any of our e2e tests even checked that cAdvisor was running or exposing metrics on all nodes, it would have been caught before release. We should add at least some coverage for it to catch bugs like this.\nIf any of you are up for adding an e2e test for cadvisor, that'd be much appreciated!\nI will work on adding one for cadvisor. , Alex Mohr wrote:\nWe already have a e2e test, which validate monitoring of Clusters using cAdvisor. We should figure out why the test passed without detecting such failure, and improve that test.\nThe existing test will succeed even if some of the nodes do not have cadvisor running. We can enhance the monitoring test to specifically check for cadvisor's health as well. , Dawn Chen wrote:\nThis test seems to be failing deterministically.\nThis was after fix to retry, but we're not exactly sure why.\nOnce I have emerged from the e2e test for cluster level logging with Elasticsearch quagmire (soon I hope -- perhaps by Wednesday) then I might also be willing to lend a hand with this since some of the experience might be relevant.\nActually latest error looks like So it's still not coming up after 5 minutes. The question is how long should it be taking to come up, given that the cadvisor containers get scheduled basically on node creation. The test finished in < 1s on my e2e cluster that has been up for awhile.\nOr there's an actual bug here. can you look into it?\n(I looked at the Jenkins log, and this test ran a good 10 minutes after startup, so it probably had 15 minutes to finish coming up. That seems wrong, given that cAdvisor is actually the first container we ever schedule, and it gets scheduled raw.)\nSorry, I hadn't seen -- not sure if that fixes this?\nNo, there are failures after as well. On Feb 20, 2015 5:39 PM, \"David Oppenheimer\" wrote:\nI don't see any issue with the test itself. The test attempts to access a REST endpoint ('/stats') via the proxy on the api-server. Is it possible that the api-server doesn't get to a stable state for a while?\nThis test executes after several others that hit the apiserver? 
On Feb 22, 2015 4:15 PM, \"Vish Kannan\" wrote:\nNoob question: How can we tell if it runs this test after the other tests?\nYou can't tell deterministically (i.e. it's not hardcoded). I was merely commenting that in several runs where this fails, it has run after many other tests. We've been randomizing the spec order for a while. You can look at the Jenkins console logs to get the random seed for a given run: And the console logs have the order just by manually scanning the horizontal bars, but it's kind of pain. (That seed was off of go/k8s-test/job/kubernetes-e2e-gce/2638/consoleFull)\nThe reason for this failure is . I am closing this issue since the test itself is fine.", "positive_passages": [{"docid": "doc-en-kubernetes-d998d01d6651bff9e21e9c2387fc7a64b5509059ec20c69804d524105991c41c", "text": " /* Copyright 2015 Google Inc. All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package e2e import ( \"fmt\" \"time\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/client\" . \"github.com/onsi/ginkgo\" ) var _ = Describe(\"Cadvisor\", func() { var c *client.Client BeforeEach(func() { var err error c, err = loadClient() expectNoError(err) }) It(\"cadvisor should be healthy on every node.\", func() { CheckCadvisorHealthOnAllNodes(c) }) }) func CheckCadvisorHealthOnAllNodes(c *client.Client) { By(\"getting list of nodes\") nodeList, err := c.Nodes().List() expectNoError(err) for _, node := range nodeList.Items { // cadvisor is not accessible directly unless its port (4194 by default) is exposed. // Here, we access '/stats/' REST endpoint on the kubelet which polls cadvisor internally. statsResource := fmt.Sprintf(\"api/v1beta1/proxy/minions/%s/stats/\", node.Name) By(fmt.Sprintf(\"Querying stats from node %s using url %s\", node.Name, statsResource)) _, err = c.Get().AbsPath(statsResource).Timeout(1 * time.Second).Do().Raw() expectNoError(err) } } ", "commid": "kubernetes_pr_4506"}], "negative_passages": []} {"query_id": "q-en-kubernetes-9c9d5bfd1ce2491596aef565e138702c57296cb976abe91eb5a2e9202f8205ed", "query": "W0128 18:44:26. ] Pod from test failed validation, ignoring: namespace: required value '' E0128 18:44:26. ] Could not construct reference to: '&api.BoundPod{TypeMeta:api.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:api.ObjectMeta{Name:\"foo\", Namespace:\"\", SelfLink:\"\", UID:\"foo\", ResourceVersion:\"\", CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0x0, loc:(time.Location)(nil)}}, Labels:map[string]string(nil), Annotations:map[string]string{\"\":\"\"}}, Spec:api.PodSpec{Volumes:[]api.Volume(nil), Containers:[]api.Container(nil), RestartPolicy:api.RestartPolicy{Always:(api.RestartPolicyAlways)(0xe3d670), OnFailure:(api.RestartPolicyOnFailure)(nil), Never:(api.RestartPolicyNever)(nil)}, DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), Host:\"\"}}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'failedValidation' 'Error validating pod foo. 
case <-time.After(2 * time.Millisecond): case <-time.After(time.Second): t.Errorf(\"Expected update, timeout instead\") } }", "commid": "kubernetes_pr_3887"}], "negative_passages": []} {"query_id": "q-en-kubernetes-9c9d5bfd1ce2491596aef565e138702c57296cb976abe91eb5a2e9202f8205ed", "query": "W0128 18:44:26. ] Pod from test failed validation, ignoring: namespace: required value '' E0128 18:44:26. ] Could not construct reference to: '&api.BoundPod{TypeMeta:api.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:api.ObjectMeta{Name:\"foo\", Namespace:\"\", SelfLink:\"\", UID:\"foo\", ResourceVersion:\"\", CreationTimestamp:util.Time{Time:time.Time{sec:0, nsec:0x0, loc:(time.Location)(nil)}}, Labels:map[string]string(nil), Annotations:map[string]string{\"\":\"\"}}, Spec:api.PodSpec{Volumes:[]api.Volume(nil), Containers:[]api.Container(nil), RestartPolicy:api.RestartPolicy{Always:(api.RestartPolicyAlways)(0xe3d670), OnFailure:(api.RestartPolicyOnFailure)(nil), Never:(api.RestartPolicyNever)(nil)}, DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), Host:\"\"}}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'failedValidation' 'Error validating pod foo. case <-time.After(2 * time.Millisecond): case <-time.After(time.Second): t.Errorf(\"Expected update, timeout instead\") } }", "commid": "kubernetes_pr_3887"}], "negative_passages": []} {"query_id": "q-en-kubernetes-48b6be9a789fe76ef913476cd055a1bb3da8978c4034069a469636df6a1d0695", "query": "writes: As we start looking into etcd and event memory pressure [], we should look into switching over to etcd 2.0 which was released today.\ncc/\n+1 On Jan 28, 2015 5:29 PM, \"Dawn Chen\" wrote:\nMy long-running cluster tests resulted in a hard-crash on etcd: We should move to 2.0 ASAP in the hopes of (a) fixing the issue and if not (b) prioritizing the fix since we'll be complaining about head. Abhi, is this something you can pick up and drive to land in O(days) please?\nCalling this P0 as it blocks further long-running cluster stability testing.\nI will do this today. , Alex Mohr wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-a16830786ad922ae371a556faf596d13c124c91ae32b8e6ca7d5a57ee9b59ce6", "text": "# shasum # 6. Update this file with new tar version and new hash {% set etcd_version=\"v0.4.6\" %} {% set etcd_version=\"v2.0.0\" %} {% set etcd_tar_url=\"https://storage.googleapis.com/kubernetes-release/etcd/etcd-%s-linux-amd64.tar.gz\" | format(etcd_version) %} {% set etcd_tar_hash=\"sha1=5db514e30b9f340eda00671230d5136855ae14d7\" %} {% set etcd_tar_hash=\"sha1=b3cd41d1748bf882a58a98c9585fd5849b943811\" %} etcd-tar: archive:", "commid": "kubernetes_pr_3960"}], "negative_passages": []} {"query_id": "q-en-kubernetes-48b6be9a789fe76ef913476cd055a1bb3da8978c4034069a469636df6a1d0695", "query": "writes: As we start looking into etcd and event memory pressure [], we should look into switching over to etcd 2.0 which was released today.\ncc/\n+1 On Jan 28, 2015 5:29 PM, \"Dawn Chen\" wrote:\nMy long-running cluster tests resulted in a hard-crash on etcd: We should move to 2.0 ASAP in the hopes of (a) fixing the issue and if not (b) prioritizing the fix since we'll be complaining about head. Abhi, is this something you can pick up and drive to land in O(days) please?\nCalling this P0 as it blocks further long-running cluster stability testing.\nI will do this today. 
, Alex Mohr wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-683e6658148ffee96574ea294ab137472d5de55ee1fce20d75fcd621f3d6478f", "text": "DESC=\"The etcd key-value share configuration service\" NAME=etcd DAEMON=/usr/local/bin/$NAME DAEMON_ARGS=\"-peer-addr $HOSTNAME:7001 -name $HOSTNAME\" # DAEMON_ARGS=\"-peer-addr $HOSTNAME:7001 -name $HOSTNAME\" host_ip=$(hostname -i) DAEMON_ARGS=\"-addr ${host_ip}:4001 -bind-addr ${host_ip}:4001 -data-dir /var/etcd -initial-advertise-peer-urls http://${HOSTNAME}:2380 -name ${HOSTNAME} -initial-cluster ${HOSTNAME}=http://${HOSTNAME}:2380\" DAEMON_LOG_FILE=/var/log/$NAME.log PIDFILE=/var/run/$NAME.pid SCRIPTNAME=/etc/init.d/$NAME", "commid": "kubernetes_pr_3960"}], "negative_passages": []} {"query_id": "q-en-kubernetes-48b6be9a789fe76ef913476cd055a1bb3da8978c4034069a469636df6a1d0695", "query": "writes: As we start looking into etcd and event memory pressure [], we should look into switching over to etcd 2.0 which was released today.\ncc/\n+1 On Jan 28, 2015 5:29 PM, \"Dawn Chen\" wrote:\nMy long-running cluster tests resulted in a hard-crash on etcd: We should move to 2.0 ASAP in the hopes of (a) fixing the issue and if not (b) prioritizing the fix since we'll be complaining about head. Abhi, is this something you can pick up and drive to land in O(days) please?\nCalling this P0 as it blocks further long-running cluster stability testing.\nI will do this today. , Alex Mohr wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-4897f940020ebbc6d789c1b604af7826318164ed84f0fcb9f572550b09d8a6ca", "text": "exit 1 fi version=$(etcd -version | cut -d \" \" -f 3) if [[ \"${version}\" < \"2.0.0\" ]]; then kube::log::usage \"etcd version 2.0.0 or greater required.\" exit 1 fi # Start etcd ETCD_DIR=$(mktemp -d -t test-etcd.XXXXXX) etcd -name test -data-dir ${ETCD_DIR} -addr ${host}:${port} >/dev/null 2>/dev/null & kube::log::usage \"etcd -data-dir ${ETCD_DIR} -addr ${host}:${port} >/dev/null 2>/dev/null\" etcd -data-dir ${ETCD_DIR} -addr ${host}:${port} >/dev/null 2>/dev/null & ETCD_PID=$! kube::util::wait_for_url \"http://${host}:${port}/v2/keys/\" \"etcd: \" echo \"Waiting for etcd to come up.\" while true; do if curl -L http://127.0.0.1:4001/v2/keys/test -XPUT -d value=\"test\"; then break fi done kube::util::wait_for_url \"http://${host}:${port}/v2/keys/test\" \"etcd: \" } kube::etcd::cleanup() {", "commid": "kubernetes_pr_3960"}], "negative_passages": []} {"query_id": "q-en-kubernetes-48b6be9a789fe76ef913476cd055a1bb3da8978c4034069a469636df6a1d0695", "query": "writes: As we start looking into etcd and event memory pressure [], we should look into switching over to etcd 2.0 which was released today.\ncc/\n+1 On Jan 28, 2015 5:29 PM, \"Dawn Chen\" wrote:\nMy long-running cluster tests resulted in a hard-crash on etcd: We should move to 2.0 ASAP in the hopes of (a) fixing the issue and if not (b) prioritizing the fix since we'll be complaining about head. Abhi, is this something you can pick up and drive to land in O(days) please?\nCalling this P0 as it blocks further long-running cluster stability testing.\nI will do this today. , Alex Mohr wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-7cfd15081eef595237856b11fd1d8b4236ac17462447857f3ecdac70ffe3764a", "text": "KUBE_ROOT=$(dirname \"${BASH_SOURCE}\")/../.. 
ETCD_VERSION=${ETCD_VERSION:-v0.4.6} ETCD_VERSION=${ETCD_VERSION:-v2.0.0} cd \"${KUBE_ROOT}/third_party\" curl -sL https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz ", "commid": "kubernetes_pr_3960"}], "negative_passages": []} {"query_id": "q-en-kubernetes-618a17ee1a2bfcaa30e720bd6c59c3ac91ee900494c2c92ac1b2427757bb09d9", "query": "reported a variety of failure modes in .\n+1\nI figured I'd add to this issue rather than create a new one. I attempted to create a vagrant cluster today (dev tree is up to date) with 1 Master and 1 Minion and the cluster creation is stuck in the 'Validating master' loop ( for over 10 minutes in the validating loop ). salt-master and salt-minion logs seem to be empty ( they don't appear to be running, in fact ).\nI still have the cluster running in case anybody wants additional information.\n/cc please let me know if there is any other information about my vagrant setup that I need to post here.\nI have been working to get more stability in this environment over last few days. Can you try again from HEAD after this merges:\nCan you also verify that you built a release prior to running cluster/Kube-up?\nyes, I started from scratch and built a release before running cluster/kube- - cluster/kube- seems to fail fairly early in case there is no release build available.\nIt looks like got merged. I'll try again and get back to you.\nThe vagrant cluster seems to come up ( completes master and minion validation ). However, there is new credentials related error that is showing up at the end of running cluster/kube- :\nI'll add one more problem I've encountered. Runing from head:\nNew to me. Will investigate more tomorrow. Sent from my iPhone\nAlso new to me :-) - will see if I can reproduce either. Sent from my iPhone\nYour credendials and certificates are not working with new environment. Just delete your .kubernetesvagrant* files and recreate the cluster. It would be nice to automate this task, if you tear down the cluster.\nI am able to reproduce the 502 Bad Gateway error and am working on determining what changed in order to provide a resolution over the last couple days. Please assign this issue to me.\nThe issue appears to me that the kube-apiserver is not getting provisioned by salt correctly. Parsing Salt logs to determine why.\nLooks like the file is not getting laid down by salt for some reason ;-)\nThe Jinja template was changed as part of identifying the cluster. Fixing up now, and will send a PR to close this issue.\nthat was it, thanks!", "positive_passages": [{"docid": "doc-en-kubernetes-6e3c74340fb0c64c805550bd7e7f70304bcbe8e68ef36a554bf030df27c671f3", "text": "# The IP of the master export MASTER_IP=\"10.245.1.2\" export INSTANCE_PREFIX=kubernetes export INSTANCE_PREFIX=\"kubernetes\" export MASTER_NAME=\"${INSTANCE_PREFIX}-master\" # Map out the IPs, names and container subnets of each minion", "commid": "kubernetes_pr_4903"}], "negative_passages": []} {"query_id": "q-en-kubernetes-618a17ee1a2bfcaa30e720bd6c59c3ac91ee900494c2c92ac1b2427757bb09d9", "query": "reported a variety of failure modes in .\n+1\nI figured I'd add to this issue rather than create a new one. I attempted to create a vagrant cluster today (dev tree is up to date) with 1 Master and 1 Minion and the cluster creation is stuck in the 'Validating master' loop ( for over 10 minutes in the validating loop ). 
salt-master and salt-minion logs seem to be empty ( they don't appear to be running, in fact ).\nI still have the cluster running in case anybody wants additional information.\n/cc please let me know if there is any other information about my vagrant setup that I need to post here.\nI have been working to get more stability in this environment over last few days. Can you try again from HEAD after this merges:\nCan you also verify that you built a release prior to running cluster/Kube-up?\nyes, I started from scratch and built a release before running cluster/kube- - cluster/kube- seems to fail fairly early in case there is no release build available.\nIt looks like got merged. I'll try again and get back to you.\nThe vagrant cluster seems to come up ( completes master and minion validation ). However, there is new credentials related error that is showing up at the end of running cluster/kube- :\nI'll add one more problem I've encountered. Runing from head:\nNew to me. Will investigate more tomorrow. Sent from my iPhone\nAlso new to me :-) - will see if I can reproduce either. Sent from my iPhone\nYour credendials and certificates are not working with new environment. Just delete your .kubernetesvagrant* files and recreate the cluster. It would be nice to automate this task, if you tear down the cluster.\nI am able to reproduce the 502 Bad Gateway error and am working on determining what changed in order to provide a resolution over the last couple days. Please assign this issue to me.\nThe issue appears to me that the kube-apiserver is not getting provisioned by salt correctly. Parsing Salt logs to determine why.\nLooks like the file is not getting laid down by salt for some reason ;-)\nThe Jinja template was changed as part of identifying the cluster. Fixing up now, and will send a PR to close this issue.\nthat was it, thanks!", "positive_passages": [{"docid": "doc-en-kubernetes-d5c5378f24fb46bd45f685f67eaff4fbed9650af9553afa78f5caa2dd351db0c", "text": "dns_replicas: '$(echo \"$DNS_REPLICAS\" | sed -e \"s/'/''/g\")' dns_server: '$(echo \"$DNS_SERVER_IP\" | sed -e \"s/'/''/g\")' dns_domain: '$(echo \"$DNS_DOMAIN\" | sed -e \"s/'/''/g\")' instance_prefix: '$(echo \"$INSTANCE_PREFIX\" | sed -e \"s/'/''/g\")' EOF # Configure the salt-master", "commid": "kubernetes_pr_4903"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1898b360a97767b2693ea582431a164ade42bd7fbf892fdd653b38d513260c90", "query": "contains function get-password. This uses a template which gets the current context. But a user who has not yet used the kubectl contexts feature has no context set. This causes a nil pointer deref while evaluating the template.\nI think kube-up creates the needed entries in the .kubeconfig file. But it calls get-password before it does that. Need to allow the kubectl config view command to fail.\nokay, I have a fix.", "positive_passages": [{"docid": "doc-en-kubernetes-a7467afaa396090f75d283b1bdff2292a8e93c42902eb7951ab7c56a69e5bdaf", "text": "# KUBE_PASSWORD function get-password { # go template to extract the auth-path of the current-context user local template='{{$ctx := index . \"current-context\"}}{{$user := index . \"contexts\" $ctx \"user\"}}{{index . \"users\" $user \"auth-path\"}}' local template='{{with $ctx := index . \"current-context\"}}{{$user := index . \"contexts\" $ctx \"user\"}}{{index . \"users\" $user \"auth-path\"}}{{end}}' local file=$(\"${KUBE_ROOT}/cluster/kubectl.sh\" config view -o template --template=\"${template}\") if [[ -r \"$file\" ]]; then if [[ ! 
-z \"$file\" && -r \"$file\" ]]; then KUBE_USER=$(cat \"$file\" | python -c 'import json,sys;print json.load(sys.stdin)[\"User\"]') KUBE_PASSWORD=$(cat \"$file\" | python -c 'import json,sys;print json.load(sys.stdin)[\"Password\"]') return", "commid": "kubernetes_pr_4353"}], "negative_passages": []} {"query_id": "q-en-kubernetes-2d8db3db60a856068241611a4e608d0c0eb92592804309234e8facba95234683", "query": "I have been following the instructions here: to try to explore the kube UI. by executing: I get the following which shows it successfully started: When I try to access the page I get a 404:\ncould you try running ? I believe is a typo of\nAlso, try accessing instead.\nthanks", "positive_passages": [{"docid": "doc-en-kubernetes-9239655b41b76696f68403bf206d47e9ae1a33eb75da975d968e4ae07d6e8c28", "text": "Start the server: ```sh cluster/kubectl.sh proxy -www=$PWD/www cluster/kubectl.sh proxy --www=$PWD/www ``` The UI should now be running on [localhost](http://localhost:8001/static/index.html#/groups//selector)", "commid": "kubernetes_pr_4611"}], "negative_passages": []} {"query_id": "q-en-kubernetes-33e3eb0cb35f85a754b3ee3944c38e72c05cb6c9cd3f09473af5448c1ac03388", "query": "I tried to create a pod without containers to see if it's supported or not. (does a pod without containers have a meaning in k8s?) I was getting \"conflict 409 already exists\" error although I used a new pod with a new name/id, so it's not clear whether it's supported or not, and whether this error is a mistake (409 instead of 400)\nWhat is the reason to create pod without specifiing image / creating container? Please provide a use case when such functionality is necessry.\n- to me it's not necessary. However if it's not supported, a validation should prevent it not to allow corrupted entities. And if it is supported, it will be good to understand why. For instance, service with nil selector is supported, though it might not make sense to some users. see this:\nYes, it should be allowed, and I'm pretty sure it used to work. Could you please post your object schema here or somewhere?\nwhat do you mean by object schema? an example json of such pod or something broader? and btw what meaning a pod has without containers?\ncc\nI'm fine with banning this for now. The use cases where it would be useful (e.g., reserving resources, staging deployment, prefetching images, ...) aren't currently well supported. When we add the features that could take advantage of this (e.g., pod-level resources, additional of containers during update, container volumes, pod/volume init hooks), then we can make it legal again at that point.\nin that case, a clear error message coming from REST would be in place when a pod with no containers is recieved by the server :)\nI'll look into this. AFAIU we want to return meaningful error if user specifies pod with no containers. Is my understanding correct?\nCorrect. We should do it in validateContainers:\ncc\nshould we document this somewhere (e.g. ). Validation is not versioned, so any change would apply to all existing APIs.\nYes, it should be documented, in the field descriptions, such as:\nI'll send a PR for this.\ncc", "positive_passages": [{"docid": "doc-en-kubernetes-6484546cff8dc36e27c9001cd1d1cb7e711f9fdc498f6376049f601a5982a344", "text": "// PodSpec is a description of a pod type PodSpec struct { Volumes []Volume `json:\"volumes\"` Volumes []Volume `json:\"volumes\"` // Required: there must be at least one container in a pod. 
Containers []Container `json:\"containers\"` RestartPolicy RestartPolicy `json:\"restartPolicy,omitempty\"` // Required: Set DNS policy.", "commid": "kubernetes_pr_5703"}], "negative_passages": []} {"query_id": "q-en-kubernetes-33e3eb0cb35f85a754b3ee3944c38e72c05cb6c9cd3f09473af5448c1ac03388", "query": "I tried to create a pod without containers to see if it's supported or not. (does a pod without containers have a meaning in k8s?) I was getting \"conflict 409 already exists\" error although I used a new pod with a new name/id, so it's not clear whether it's supported or not, and whether this error is a mistake (409 instead of 400)\nWhat is the reason to create pod without specifiing image / creating container? Please provide a use case when such functionality is necessry.\n- to me it's not necessary. However if it's not supported, a validation should prevent it not to allow corrupted entities. And if it is supported, it will be good to understand why. For instance, service with nil selector is supported, though it might not make sense to some users. see this:\nYes, it should be allowed, and I'm pretty sure it used to work. Could you please post your object schema here or somewhere?\nwhat do you mean by object schema? an example json of such pod or something broader? and btw what meaning a pod has without containers?\ncc\nI'm fine with banning this for now. The use cases where it would be useful (e.g., reserving resources, staging deployment, prefetching images, ...) aren't currently well supported. When we add the features that could take advantage of this (e.g., pod-level resources, additional of containers during update, container volumes, pod/volume init hooks), then we can make it legal again at that point.\nin that case, a clear error message coming from REST would be in place when a pod with no containers is recieved by the server :)\nI'll look into this. AFAIU we want to return meaningful error if user specifies pod with no containers. Is my understanding correct?\nCorrect. We should do it in validateContainers:\ncc\nshould we document this somewhere (e.g. ). Validation is not versioned, so any change would apply to all existing APIs.\nYes, it should be documented, in the field descriptions, such as:\nI'll send a PR for this.\ncc", "positive_passages": [{"docid": "doc-en-kubernetes-85ed6f05622e5d94e6078804248e939e6930f1133f0ffced90b92effd011e26e", "text": "// PodSpec is a description of a pod type PodSpec struct { Volumes []Volume `json:\"volumes\" description:\"list of volumes that can be mounted by containers belonging to the pod\"` Containers []Container `json:\"containers\" description:\"list of containers belonging to the pod; containers cannot currently be added or removed\"` Volumes []Volume `json:\"volumes\" description:\"list of volumes that can be mounted by containers belonging to the pod\"` // Required: there must be at least one container in a pod. Containers []Container `json:\"containers\" description:\"list of containers belonging to the pod; containers cannot currently be added or removed; there must be at least one container in a Pod\"` RestartPolicy RestartPolicy `json:\"restartPolicy,omitempty\" description:\"restart policy for all containers within the pod; one of RestartPolicyAlways, RestartPolicyOnFailure, RestartPolicyNever\"` // Optional: Set DNS policy. 
Defaults to \"ClusterFirst\" DNSPolicy DNSPolicy `json:\"dnsPolicy,omitempty\" description:\"DNS policy for containers within the pod; one of 'ClusterFirst' or 'Default'\"`", "commid": "kubernetes_pr_5703"}], "negative_passages": []} {"query_id": "q-en-kubernetes-33e3eb0cb35f85a754b3ee3944c38e72c05cb6c9cd3f09473af5448c1ac03388", "query": "I tried to create a pod without containers to see if it's supported or not. (does a pod without containers have a meaning in k8s?) I was getting \"conflict 409 already exists\" error although I used a new pod with a new name/id, so it's not clear whether it's supported or not, and whether this error is a mistake (409 instead of 400)\nWhat is the reason to create pod without specifiing image / creating container? Please provide a use case when such functionality is necessry.\n- to me it's not necessary. However if it's not supported, a validation should prevent it not to allow corrupted entities. And if it is supported, it will be good to understand why. For instance, service with nil selector is supported, though it might not make sense to some users. see this:\nYes, it should be allowed, and I'm pretty sure it used to work. Could you please post your object schema here or somewhere?\nwhat do you mean by object schema? an example json of such pod or something broader? and btw what meaning a pod has without containers?\ncc\nI'm fine with banning this for now. The use cases where it would be useful (e.g., reserving resources, staging deployment, prefetching images, ...) aren't currently well supported. When we add the features that could take advantage of this (e.g., pod-level resources, additional of containers during update, container volumes, pod/volume init hooks), then we can make it legal again at that point.\nin that case, a clear error message coming from REST would be in place when a pod with no containers is recieved by the server :)\nI'll look into this. AFAIU we want to return meaningful error if user specifies pod with no containers. Is my understanding correct?\nCorrect. We should do it in validateContainers:\ncc\nshould we document this somewhere (e.g. ). Validation is not versioned, so any change would apply to all existing APIs.\nYes, it should be documented, in the field descriptions, such as:\nI'll send a PR for this.\ncc", "positive_passages": [{"docid": "doc-en-kubernetes-8f5013ede8864d4a4f033efd03c33f59b04400995c2fee31addc5a9a891f4f2d", "text": "// PodSpec is a description of a pod type PodSpec struct { Volumes []Volume `json:\"volumes\" description:\"list of volumes that can be mounted by containers belonging to the pod\"` Containers []Container `json:\"containers\" description:\"list of containers belonging to the pod; cannot be updated; containers cannot currently be added or removed\"` Volumes []Volume `json:\"volumes\" description:\"list of volumes that can be mounted by containers belonging to the pod\"` // Required: there must be at least one container in a pod. Containers []Container `json:\"containers\" description:\"list of containers belonging to the pod; cannot be updated; containers cannot currently be added or removed; there must be at least one container in a Pod\"` RestartPolicy RestartPolicy `json:\"restartPolicy,omitempty\" description:\"restart policy for all containers within the pod; one of RestartPolicyAlways, RestartPolicyOnFailure, RestartPolicyNever\"` // Optional: Set DNS policy. 
Defaults to \"ClusterFirst\" DNSPolicy DNSPolicy `json:\"dnsPolicy,omitempty\" description:\"DNS policy for containers within the pod; one of 'ClusterFirst' or 'Default'\"`", "commid": "kubernetes_pr_5703"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8bb108261f9ea31f2e1442aeb830b4a2731925b51c70e95fc60efadb69675c81", "query": "We use capitals for Pod fields (ex: Status.Host) and camelCase for Event fields (ex: involvedObject.resourceVersion). We should be consistent. Should it be capitals? camelCase? Or should we be case insensitive?\nIMO, we should use camelCase for pod fields, to match the way it's serialized in JSON.\nI agree with The goal here is to be consistent with the API, not with the internal Go representation.\nI'd like to fix this before v1beta3 is cast in stone.\nok. Event field selectors are correct then. I will update the selectors for pods to make them camelCase.\nI am wondering if I should change this only for v1beta3 or for 1 and 2 as well? This will be a breaking change for beta1 and 2 if we decide to do that.\nv1beta3 only, though that's hard, we should reconsider. We're contemplating another \"breakage day\", and I doubt this is widely used.", "positive_passages": [{"docid": "doc-en-kubernetes-70e1511db8a028ea8c2bc85a3c0c946e9059511717d715dadfc9db9d818cb540", "text": "case \"name\": return \"name\", value, nil case \"DesiredState.Host\": return \"Status.Host\", value, nil return \"status.host\", value, nil case \"DesiredState.Status\": podStatus := PodStatus(value) var internalValue newer.PodPhase newer.Scheme.Convert(&podStatus, &internalValue) return \"Status.Phase\", string(internalValue), nil return \"status.phase\", string(internalValue), nil default: return \"\", \"\", fmt.Errorf(\"field label not supported: %s\", label) }", "commid": "kubernetes_pr_5220"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8bb108261f9ea31f2e1442aeb830b4a2731925b51c70e95fc60efadb69675c81", "query": "We use capitals for Pod fields (ex: Status.Host) and camelCase for Event fields (ex: involvedObject.resourceVersion). We should be consistent. Should it be capitals? camelCase? Or should we be case insensitive?\nIMO, we should use camelCase for pod fields, to match the way it's serialized in JSON.\nI agree with The goal here is to be consistent with the API, not with the internal Go representation.\nI'd like to fix this before v1beta3 is cast in stone.\nok. Event field selectors are correct then. I will update the selectors for pods to make them camelCase.\nI am wondering if I should change this only for v1beta3 or for 1 and 2 as well? This will be a breaking change for beta1 and 2 if we decide to do that.\nv1beta3 only, though that's hard, we should reconsider. We're contemplating another \"breakage day\", and I doubt this is widely used.", "positive_passages": [{"docid": "doc-en-kubernetes-9ea43e3f33eeb3743dd7f22ee659669331a52a93e2eb10bfdfb6d516602d326a", "text": "switch label { case \"name\": fallthrough case \"Status.Phase\": case \"status.phase\": fallthrough case \"Status.Host\": case \"status.host\": return label, value, nil default: return \"\", \"\", fmt.Errorf(\"field label not supported: %s\", label)", "commid": "kubernetes_pr_5220"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8bb108261f9ea31f2e1442aeb830b4a2731925b51c70e95fc60efadb69675c81", "query": "We use capitals for Pod fields (ex: Status.Host) and camelCase for Event fields (ex: involvedObject.resourceVersion). We should be consistent. Should it be capitals? camelCase? 
Or should we be case insensitive?\nIMO, we should use camelCase for pod fields, to match the way it's serialized in JSON.\nI agree with The goal here is to be consistent with the API, not with the internal Go representation.\nI'd like to fix this before v1beta3 is cast in stone.\nok. Event field selectors are correct then. I will update the selectors for pods to make them camelCase.\nI am wondering if I should change this only for v1beta3 or for 1 and 2 as well? This will be a breaking change for beta1 and 2 if we decide to do that.\nv1beta3 only, though that's hard, we should reconsider. We're contemplating another \"breakage day\", and I doubt this is widely used.", "positive_passages": [{"docid": "doc-en-kubernetes-13947009bc9f7b46fc829ab3204d7997466abf7c3810b798776b65c05d4e7241", "text": "label: \"label=qux\", expectedIDs: util.NewStringSet(\"qux\"), }, { field: \"Status.Phase=Failed\", field: \"status.phase=Failed\", expectedIDs: util.NewStringSet(\"baz\"), }, { field: \"Status.Host=barhost\", field: \"status.host=barhost\", expectedIDs: util.NewStringSet(\"bar\"), }, { field: \"Status.Host=\", field: \"status.host=\", expectedIDs: util.NewStringSet(\"foo\", \"baz\", \"qux\", \"zot\"), }, { field: \"Status.Host!=\", field: \"status.host!=\", expectedIDs: util.NewStringSet(\"bar\"), }, }", "commid": "kubernetes_pr_5220"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8bb108261f9ea31f2e1442aeb830b4a2731925b51c70e95fc60efadb69675c81", "query": "We use capitals for Pod fields (ex: Status.Host) and camelCase for Event fields (ex: involvedObject.resourceVersion). We should be consistent. Should it be capitals? camelCase? Or should we be case insensitive?\nIMO, we should use camelCase for pod fields, to match the way it's serialized in JSON.\nI agree with The goal here is to be consistent with the API, not with the internal Go representation.\nI'd like to fix this before v1beta3 is cast in stone.\nok. Event field selectors are correct then. I will update the selectors for pods to make them camelCase.\nI am wondering if I should change this only for v1beta3 or for 1 and 2 as well? This will be a breaking change for beta1 and 2 if we decide to do that.\nv1beta3 only, though that's hard, we should reconsider. We're contemplating another \"breakage day\", and I doubt this is widely used.", "positive_passages": [{"docid": "doc-en-kubernetes-1c5c249d845c52b7f0673d42c2280fe388cae07d2186046b45c0b90b81ec0ff7", "text": "func PodToSelectableFields(pod *api.Pod) labels.Set { return labels.Set{ \"name\": pod.Name, \"Status.Phase\": string(pod.Status.Phase), \"Status.Host\": pod.Status.Host, \"status.phase\": string(pod.Status.Phase), \"status.host\": pod.Status.Host, } }", "commid": "kubernetes_pr_5220"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8bb108261f9ea31f2e1442aeb830b4a2731925b51c70e95fc60efadb69675c81", "query": "We use capitals for Pod fields (ex: Status.Host) and camelCase for Event fields (ex: involvedObject.resourceVersion). We should be consistent. Should it be capitals? camelCase? Or should we be case insensitive?\nIMO, we should use camelCase for pod fields, to match the way it's serialized in JSON.\nI agree with The goal here is to be consistent with the API, not with the internal Go representation.\nI'd like to fix this before v1beta3 is cast in stone.\nok. Event field selectors are correct then. I will update the selectors for pods to make them camelCase.\nI am wondering if I should change this only for v1beta3 or for 1 and 2 as well? 
This will be a breaking change for beta1 and 2 if we decide to do that.\nv1beta3 only, though that's hard, we should reconsider. We're contemplating another \"breakage day\", and I doubt this is widely used.", "positive_passages": [{"docid": "doc-en-kubernetes-3cffa85d874fce7a91b30059510244f0bfd0ee6dcca9b7598659c0f0d33d1e5e", "text": "case \"v1beta1\", \"v1beta2\": return \"DesiredState.Host\" default: return \"Status.Host\" return \"status.host\" } }", "commid": "kubernetes_pr_5220"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f30562564cb56eba266b9f312736a73e9a741562f4b1725bf5d9e724708ac4cc", "query": "As Satnam mentioned in PR , our Fluentd pod tends to not collect the first chunk of logs from each container. I've noticed that it seems to miss different amounts of logs depending on how quickly the container generates them, so it may be a timing issue. cc\nSee", "positive_passages": [{"docid": "doc-en-kubernetes-d974c948cd5e284562a0eb13316116d1c4c247db3126c1374594721930e1ada1", "text": "id: fluentd-to-gcp containers: - name: fluentd-gcp-container image: kubernetes/fluentd-gcp:1.0 image: kubernetes/fluentd-gcp:1.1 volumeMounts: - name: containers mountPath: /var/lib/docker/containers", "commid": "kubernetes_pr_5529"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f30562564cb56eba266b9f312736a73e9a741562f4b1725bf5d9e724708ac4cc", "query": "As Satnam mentioned in PR , our Fluentd pod tends to not collect the first chunk of logs from each container. I've noticed that it seems to miss different amounts of logs depending on how quickly the container generates them, so it may be a timing issue. cc\nSee", "positive_passages": [{"docid": "doc-en-kubernetes-fa76cdaaa31be376377ee41866579e41ccf0a8555e0690328e46ba3e07efcb70", "text": ".PHONY:\tbuild push TAG = 1.0 TAG = 1.1 build: sudo docker build -t kubernetes/fluentd-gcp:$(TAG) .", "commid": "kubernetes_pr_5529"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f30562564cb56eba266b9f312736a73e9a741562f4b1725bf5d9e724708ac4cc", "query": "As Satnam mentioned in PR , our Fluentd pod tends to not collect the first chunk of logs from each container. I've noticed that it seems to miss different amounts of logs depending on how quickly the container generates them, so it may be a timing issue. cc\nSee", "positive_passages": [{"docid": "doc-en-kubernetes-1a9de9bdee26d83ba471fedfe873f0807689d9d7ba9fd258368953654fa49de9", "text": "path /var/lib/docker/containers/*/*-json.log pos_file /var/lib/docker/containers/containers.log.pos time_format %Y-%m-%dT%H:%M:%S tag docker.container.* tag docker.* read_from_head true type google_cloud flush_interval 5s # Never wait longer than 5 minutes between retries.", "commid": "kubernetes_pr_5529"}], "negative_passages": []} {"query_id": "q-en-kubernetes-055ba99112a20fcfdd2bfe16fad9fecdba21385acc682dc56530d866ddb24088", "query": "Follow up from\nBumping this up to P1 given that 0.12.2 wasn't able to create a service with an external load balancer and we didn't notice before cutting the release.\nNo, to be clearer, 0.12.2 wasn't able to create a service with ELB on GKE. That was on our side entirely.\nBut it doesn't invalidate the need for this.\nThe example is using ELB.\nare you actively working on this? 
If not, I'd be up for doing it today.\nMaybe not actively but I know what to, however I won't be able to finish it today, so if you think you can do it today please move forward.\nWe run the e2e tests on GKE, so if we had a test for creating an ELB we would have known that the 0.12.2 release wouldn't work on GKE. I don't see how your point applies. The e2e tests are in place so that we don't have to test each feature manually to verify that it is working, and we only found out that this feature was broken on GKE after we pushed 0.12.2, hence the need for an e2e test.\nI think and you are in fact in agreement\nright now, k8petstore tests the external load balancer by curling down the amount of transactions ingested from the REST API. im playing with porting it as a e2e test now. The code in the shell script is a little raw, but you can see that it specifies a PUBLIC_IP and then curls down from it at the end ().\nthat sounds great. What I'd really like to see here though (and what is going to do early next week) is a very simple test that isolates creating a service with an ELB to verify that creating a load balancer works in isolation from a complex test of other parts of the system. This will give us a really clear signal when the test starts to fail instead of needing to debug lots of moving parts when we see a test failure.", "positive_passages": [{"docid": "doc-en-kubernetes-bc33c6c589a1b7b1392f2ca9bb3a18ec63283e63a2570d5983352667921d9e33", "text": "import ( \"fmt\" \"io/ioutil\" \"net/http\" \"sort\" \"strings\" \"time\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api\"", "commid": "kubernetes_pr_5772"}], "negative_passages": []} {"query_id": "q-en-kubernetes-055ba99112a20fcfdd2bfe16fad9fecdba21385acc682dc56530d866ddb24088", "query": "Follow up from\nBumping this up to P1 given that 0.12.2 wasn't able to create a service with an external load balancer and we didn't notice before cutting the release.\nNo, to be clearer, 0.12.2 wasn't able to create a service with ELB on GKE. That was on our side entirely.\nBut it doesn't invalidate the need for this.\nThe example is using ELB.\nare you actively working on this? If not, I'd be up for doing it today.\nMaybe not actively but I know what to, however I won't be able to finish it today, so if you think you can do it today please move forward.\nWe run the e2e tests on GKE, so if we had a test for creating an ELB we would have known that the 0.12.2 release wouldn't work on GKE. I don't see how your point applies. The e2e tests are in place so that we don't have to test each feature manually to verify that it is working, and we only found out that this feature was broken on GKE after we pushed 0.12.2, hence the need for an e2e test.\nI think and you are in fact in agreement\nright now, k8petstore tests the external load balancer by curling down the amount of transactions ingested from the REST API. im playing with porting it as a e2e test now. The code in the shell script is a little raw, but you can see that it specifies a PUBLIC_IP and then curls down from it at the end ().\nthat sounds great. What I'd really like to see here though (and what is going to do early next week) is a very simple test that isolates creating a service with an ELB to verify that creating a load balancer works in isolation from a complex test of other parts of the system. 
This will give us a really clear signal when the test starts to fail instead of needing to debug lots of moving parts when we see a test failure.", "positive_passages": [{"docid": "doc-en-kubernetes-792316d300c2c14b4ff41fefe9f89b3357d5505d15217c97651bddd5de293fc7", "text": "}() }, 240.0) It(\"should be able to create a functioning external load balancer\", func() { serviceName := \"external-lb-test\" ns := api.NamespaceDefault labels := map[string]string{ \"key0\": \"value0\", } service := &api.Service{ ObjectMeta: api.ObjectMeta{ Name: serviceName, }, Spec: api.ServiceSpec{ Port: 80, Selector: labels, TargetPort: util.NewIntOrStringFromInt(80), CreateExternalLoadBalancer: true, }, } By(\"cleaning up previous service \" + serviceName + \" from namespace \" + ns) c.Services(ns).Delete(serviceName) By(\"creating service \" + serviceName + \" with external load balancer in namespace \" + ns) result, err := c.Services(ns).Create(service) Expect(err).NotTo(HaveOccurred()) defer func(ns, serviceName string) { // clean up when we're done By(\"deleting service \" + serviceName + \" in namespace \" + ns) err := c.Services(ns).Delete(serviceName) Expect(err).NotTo(HaveOccurred()) }(ns, serviceName) if len(result.Spec.PublicIPs) != 1 { Failf(\"got unexpected number (%d) of public IPs for externally load balanced service: %v\", result.Spec.PublicIPs, result) } ip := result.Spec.PublicIPs[0] port := result.Spec.Port pod := &api.Pod{ TypeMeta: api.TypeMeta{ Kind: \"Pod\", APIVersion: \"v1beta1\", }, ObjectMeta: api.ObjectMeta{ Name: \"elb-test-\" + string(util.NewUUID()), Labels: labels, }, Spec: api.PodSpec{ Containers: []api.Container{ { Name: \"webserver\", Image: \"kubernetes/test-webserver\", }, }, }, } By(\"creating pod to be part of service \" + serviceName) podClient := c.Pods(api.NamespaceDefault) defer func() { By(\"deleting pod \" + pod.Name) defer GinkgoRecover() podClient.Delete(pod.Name) }() if _, err := podClient.Create(pod); err != nil { Failf(\"Failed to create pod %s: %v\", pod.Name, err) } expectNoError(waitForPodRunning(c, pod.Name)) By(\"hitting the pod through the service's external load balancer\") var resp *http.Response for t := time.Now(); time.Since(t) < 4*time.Minute; time.Sleep(5 * time.Second) { resp, err = http.Get(fmt.Sprintf(\"http://%s:%d\", ip, port)) if err == nil { break } } Expect(err).NotTo(HaveOccurred()) defer resp.Body.Close() body, err := ioutil.ReadAll(resp.Body) Expect(err).NotTo(HaveOccurred()) if resp.StatusCode != 200 { Failf(\"received non-success return status %q trying to access pod through load balancer; got body: %s\", resp.Status, string(body)) } if !strings.Contains(string(body), \"test-webserver\") { Failf(\"received response body without expected substring 'test-webserver': %s\", string(body)) } }) It(\"should correctly serve identically named services in different namespaces on different external IP addresses\", func() { serviceNames := []string{\"services-namespace-test0\"} // Could add more here, but then it takes longer. 
namespaces := []string{\"namespace0\", \"namespace1\"} // As above.", "commid": "kubernetes_pr_5772"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8b90e794a405b333388e6df8baae871c4efe69af9084fc8c039b202f7304993d", "query": "We already \"healthcheck\" kubelet and docker periodically, but miss one for kube-proxy.\nIt would make sense to do (only bind to localhost) at the same time we add the healthchecking configuration.", "positive_passages": [{"docid": "doc-en-kubernetes-72de335dd829a82140eb7c112dc208a30920157e1177a07be64d42712c85e186", "text": "- user: root - group: root - mode: 644 /etc/monit/conf.d/kube-proxy: file: - managed - source: salt://monit/kube-proxy - user: root - group: root - mode: 644 {% endif %} monit-service: service: - running - name: monit - name: monit - watch: - pkg: monit - pkg: monit - file: /etc/monit/conf.d/* {% endif %}", "commid": "kubernetes_pr_5984"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8b90e794a405b333388e6df8baae871c4efe69af9084fc8c039b202f7304993d", "query": "We already \"healthcheck\" kubelet and docker periodically, but miss one for kube-proxy.\nIt would make sense to do (only bind to localhost) at the same time we add the healthchecking configuration.", "positive_passages": [{"docid": "doc-en-kubernetes-3701f5e9bc547f0927e7e901d14d5fff36a9c803b0f591700698c66174802914", "text": " check process kube-proxy with pidfile /var/run/kube-proxy.pid group kube-proxy start program = \"/etc/init.d/kube-proxy start\" stop program = \"/etc/init.d/kube-proxy stop\" if does not exist then restart if failed port 10249 protocol HTTP request \"/healthz\" with timeout 10 seconds then restart ", "commid": "kubernetes_pr_5984"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d1e28174cedbb4bdd2cadfc4485cd269e35cba0fa67cd75c639195b8163f8326", "query": "stack trace here: it's due to a shared variable:\nHi, thanks for reporting the issue with a patch! Do you want to submit a PR so that you can take the credit for your fix? :)\ndone , Yu-Ju Hong wrote:\nFYI, I think the same bug may still exist in your new PR.\nThere are multiple PRs in flight, so I don't know which one you have in mind. One of them removes probing logic from controller altogether.\nI was referring to , but I didn't read that PR closely so the bug may no longer exist there. Again, just a FYI. 
:)", "positive_passages": [{"docid": "doc-en-kubernetes-442221afe3f5835f71768ab201c205c080345d3dfc3e3e1b33c8a72c2b868606", "text": "// Start syncing or monitoring node status. if syncNodeStatus { go util.Forever(func() { if err = nc.SyncProbedNodeStatus(); err != nil { if err := nc.SyncProbedNodeStatus(); err != nil { glog.Errorf(\"Error syncing status: %v\", err) } }, period) } else { go util.Forever(func() { if err = nc.MonitorNodeStatus(); err != nil { if err := nc.MonitorNodeStatus(); err != nil { glog.Errorf(\"Error monitoring node status: %v\", err) } }, nodeMonitorPeriod)", "commid": "kubernetes_pr_6418"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1ff2227b578e0892bac06941b2b21a44da67a8fc722cc77d63c9a7ab123d81a8", "query": "Appears to deadlock or otherwise hang after a few seconds and then hit the 5-minute timeout. From start of log: followed by stack traces of zillions of goroutines that mostly report being blocked for 4+ minutes. Recent instances: https://travis- https://travis- cc\nI tried to reproduce the issue by run kubelet unittests 200 times (100 for v1beta1, and 100 for v1beta3), but failed to reproduce the issue. I poked around, and found the only difference between my local machine and test machine is that I am running go 1.4.2 and test machine is running 1.3 (both failure cases posted by are with 1.3.0). I remembered from online I saw a deadlock issue reported against old golang version, but haven't found that issue yet. I will continue trying to reproduce the issue.\nSaw this in travis runs too: https://travis-\nYes, there have been multiple failures today. Check shippable, also.\nFWIW, these are not all kubelet tests failing, but all of them have many kubelet go routines in the trace - mostly stuck in managePodLoop().\nI reran a couple of thousands time last night before I leave, and all passed. Yu-Ju just pointed out that the failure posted above by is go1.4. Shippable is in a really bad shape today, tons of different tests are failed, which should be handled separately.\nOk, with some hacky debugging message into kubelet package, even I couldn't reproduce the very failure reported here, I made some progress on understanding the issue here. Basically there are two issues: 1) The flaky tests reported here are TestServeExecInContainer. All reports posted here have the same stack trace below: 2) There are tons of goroutines, especially one created for each pod. Kubelet leak those goroutines. One can see when test initial run, there are 106 goroutines. Test itself create another 41 goroutines, but none of them are created for podWorkder.\nFWIW, the tests in were creating go routines it didn't need at all. I just removed them in\nThanks for cleanup. Now the number down to 91:\ncc/\nDid something change related to this recently? I haven't heard any flakiness reports for a long time, and now they're starting up again. Any ideas? Sent from my iPhone\nI didn't see any changes related to this feature. Actually I even couldn't reproduce it on my desktop. Please note that the comment I made above all those leftover goroutines are separate issue thought.\nThis has exposed a deadlock in spdystream. I'll work with to come up with a fix.\nUpstream issue\nCool ncdc@ ! dchen1107 by the way, I've run across a few similar bugs recently where their reproducibility hinges mainly on how the test cluster was brought up. If the test cluster comes up right, then the tests pass 100% forever. If the cluster comes up bad, the tests fail, sometimes forever. 
Were you rerunning your tests against the same test cluster? Or rebuilding your test cluster between every test invocation?\nThis is unittests, not e2e. We saw test flakiness through travis and shippable, but never on my desktop.\nThis will hopefully fix the deadlock: I will do a bump commit once it's merged.\nThanks for your quick fix. I will clean up goroutines after that.\nit's entirely possible there's some bugs in my exec & port forwarding code around leaking goroutines. Are you going to take an initial look to see what's sticking around?\nYes, your test leak some goroutine, but other tests too.", "positive_passages": [{"docid": "doc-en-kubernetes-64f9b86f5ec6dfefe00b1c68860c6178cc467215741af45f44f7faf5a2753ac8", "text": "}, { \"ImportPath\": \"github.com/docker/spdystream\", \"Rev\": \"e731c8f9f19ffd7e51a469a2de1580c1dfbb4fae\" \"Rev\": \"99515db39d3dad9607e0293f18152f3d59da76dc\" }, { \"ImportPath\": \"github.com/elazarl/go-bindata-assetfs\",", "commid": "kubernetes_pr_6632"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1ff2227b578e0892bac06941b2b21a44da67a8fc722cc77d63c9a7ab123d81a8", "query": "Appears to deadlock or otherwise hang after a few seconds and then hit the 5-minute timeout. From start of log: followed by stack traces of zillions of goroutines that mostly report being blocked for 4+ minutes. Recent instances: https://travis- https://travis- cc\nI tried to reproduce the issue by run kubelet unittests 200 times (100 for v1beta1, and 100 for v1beta3), but failed to reproduce the issue. I poked around, and found the only difference between my local machine and test machine is that I am running go 1.4.2 and test machine is running 1.3 (both failure cases posted by are with 1.3.0). I remembered from online I saw a deadlock issue reported against old golang version, but haven't found that issue yet. I will continue trying to reproduce the issue.\nSaw this in travis runs too: https://travis-\nYes, there have been multiple failures today. Check shippable, also.\nFWIW, these are not all kubelet tests failing, but all of them have many kubelet go routines in the trace - mostly stuck in managePodLoop().\nI reran a couple of thousands time last night before I leave, and all passed. Yu-Ju just pointed out that the failure posted above by is go1.4. Shippable is in a really bad shape today, tons of different tests are failed, which should be handled separately.\nOk, with some hacky debugging message into kubelet package, even I couldn't reproduce the very failure reported here, I made some progress on understanding the issue here. Basically there are two issues: 1) The flaky tests reported here are TestServeExecInContainer. All reports posted here have the same stack trace below: 2) There are tons of goroutines, especially one created for each pod. Kubelet leak those goroutines. One can see when test initial run, there are 106 goroutines. Test itself create another 41 goroutines, but none of them are created for podWorkder.\nFWIW, the tests in were creating go routines it didn't need at all. I just removed them in\nThanks for cleanup. Now the number down to 91:\ncc/\nDid something change related to this recently? I haven't heard any flakiness reports for a long time, and now they're starting up again. Any ideas? Sent from my iPhone\nI didn't see any changes related to this feature. Actually I even couldn't reproduce it on my desktop. Please note that the comment I made above all those leftover goroutines are separate issue thought.\nThis has exposed a deadlock in spdystream. 
I'll work with to come up with a fix.\nUpstream issue\nCool ncdc@ ! dchen1107 by the way, I've run across a few similar bugs recently where their reproducibility hinges mainly on how the test cluster was brought up. If the test cluster comes up right, then the tests pass 100% forever. If the cluster comes up bad, the tests fail, sometimes forever. Were you rerunning your tests against the same test cluster? Or rebuilding your test cluster between every test invocation?\nThis is unittests, not e2e. We saw test flakiness through travis and shippable, but never on my desktop.\nThis will hopefully fix the deadlock: I will do a bump commit once it's merged.\nThanks for your quick fix. I will clean up goroutines after that.\nit's entirely possible there's some bugs in my exec & port forwarding code around leaking goroutines. Are you going to take an initial look to see what's sticking around?\nYes, your test leak some goroutine, but other tests too.", "positive_passages": [{"docid": "doc-en-kubernetes-053f8acab286a5f198c08967aa78b86517c62614d9910c64e78f4c53d8852bce", "text": "if timer != nil { timer.Stop() } // Start a goroutine to drain resetChan. This is needed because we've seen // some unit tests with large numbers of goroutines get into a situation // where resetChan fills up, at least 1 call to Write() is still trying to // send to resetChan, the connection gets closed, and this case statement // attempts to grab the write lock that Write() already has, causing a // deadlock. // // See https://github.com/docker/spdystream/issues/49 for more details. go func() { for _ = range resetChan { } }() i.writeLock.Lock() close(resetChan) i.resetChan = nil i.writeLock.Unlock() break Loop } }", "commid": "kubernetes_pr_6632"}], "negative_passages": []} {"query_id": "q-en-kubernetes-fd524e6826dc66b2f73439575ef5cd8d63e3e397dffa3cbe39f71d1f2888b60e", "query": "goroutine 11 [running]: net/http.(ServeMux).Handle(0xc20800a8a0, 0x9e9490, 0xd, 0x7f7d12f190a0, 0xb0e7c0) +0x244 net/http.(ServeMux).HandleFunc(0xc20800a8a0, 0x9e9490, 0xd, 0xb0e7c0) +0x6d net/http.HandleFunc(0x9e9490, 0xd, 0xb0e7c0) +0x48 \u00b7001() +0x58 created by (*SchedulerServer).Run +0x1c7\ncc", "positive_passages": [{"docid": "doc-en-kubernetes-2c22877a1660934c80a26cc59f123aad314705efbd22f89e40a359fceb84225b", "text": "client.BindClientConfigFlags(fs, &s.ClientConfig) fs.StringVar(&s.AlgorithmProvider, \"algorithm_provider\", s.AlgorithmProvider, \"The scheduling algorithm provider to use\") fs.StringVar(&s.PolicyConfigFile, \"policy_config_file\", s.PolicyConfigFile, \"File with scheduler policy configuration\") fs.BoolVar(&s.EnableProfiling, \"profiling\", false, \"Enable profiling via web interface host:port/debug/pprof/\") fs.BoolVar(&s.EnableProfiling, \"profiling\", true, \"Enable profiling via web interface host:port/debug/pprof/\") } // Run runs the specified SchedulerServer. 
This should never exit.", "commid": "kubernetes_pr_6623"}], "negative_passages": []} {"query_id": "q-en-kubernetes-fd524e6826dc66b2f73439575ef5cd8d63e3e397dffa3cbe39f71d1f2888b60e", "query": "goroutine 11 [running]: net/http.(ServeMux).Handle(0xc20800a8a0, 0x9e9490, 0xd, 0x7f7d12f190a0, 0xb0e7c0) +0x244 net/http.(ServeMux).HandleFunc(0xc20800a8a0, 0x9e9490, 0xd, 0xb0e7c0) +0x6d net/http.HandleFunc(0x9e9490, 0xd, 0xb0e7c0) +0x48 \u00b7001() +0x58 created by (*SchedulerServer).Run +0x1c7\ncc", "positive_passages": [{"docid": "doc-en-kubernetes-83c484a5085fa9d343f73b464d909c72a7ccd5d390861b384eb9e569aa4520bb", "text": "} go func() { mux := http.NewServeMux() if s.EnableProfiling { http.HandleFunc(\"/debug/pprof/\", pprof.Index) http.HandleFunc(\"/debug/pprof/profile\", pprof.Profile) http.HandleFunc(\"/debug/pprof/symbol\", pprof.Symbol) mux.HandleFunc(\"/debug/pprof/\", pprof.Index) mux.HandleFunc(\"/debug/pprof/profile\", pprof.Profile) mux.HandleFunc(\"/debug/pprof/symbol\", pprof.Symbol) } http.Handle(\"/metrics\", prometheus.Handler()) http.ListenAndServe(net.JoinHostPort(s.Address.String(), strconv.Itoa(s.Port)), nil) mux.Handle(\"/metrics\", prometheus.Handler()) server := &http.Server{ Addr: net.JoinHostPort(s.Address.String(), strconv.Itoa(s.Port)), Handler: mux, } glog.Fatal(server.ListenAndServe()) }() configFactory := factory.NewConfigFactory(kubeClient)", "commid": "kubernetes_pr_6623"}], "negative_passages": []} {"query_id": "q-en-kubernetes-859a8b847337ca2cf9879726f3bdb71f0db5559ee2589591ceae612e41f71a1b", "query": "There is a funny subtle behvaiour in get namespaces, in which if i call more than one time, it seems that the removal from Terminating state is accelarated. It seems this might indicate a bug, or maybe its not a bug and the eventual semantics of is a little funny sometimes. in any case, here's what we are seeing : 1) when i delete a namespace the first time usning kubectl, we get a state change to . that is correct, of course. 2) but ... if make the call a second time, then the same status line is immediately removed from . Expected behaviour : A second call to \"kubectl delete ...\" shouldn't have any effect on the timing of removal of a namespace from 'Terminating' state*... So i guess there is some \"eventual consistency\" subtle bug that is resolved by repeated calls to\nsuggests this may not be e bug, maybe its just the etcd eventual deleter thingy.\nLooks to definitely be a bug. The 2nd delete seems to have interrupted the delete process and artifacts (like events) from the namespace still exist in etcd.\ninvestigating...\noops, see the error, will submit a fix.\nI have a fix, but need a test case, will submit PR tomorrow AM", "positive_passages": [{"docid": "doc-en-kubernetes-a8e7be87469f3db4b87b353f0c5442e105958d66686a072ebeba0ec2f9c5238f", "text": "allErrs := errs.ValidationErrorList{} allErrs = append(allErrs, ValidateObjectMetaUpdate(&oldNamespace.ObjectMeta, &newNamespace.ObjectMeta).Prefix(\"metadata\")...) 
newNamespace.Spec = oldNamespace.Spec if newNamespace.DeletionTimestamp.IsZero() { if newNamespace.Status.Phase != api.NamespaceActive { allErrs = append(allErrs, errs.NewFieldInvalid(\"Status.Phase\", newNamespace.Status.Phase, \"A namespace may only be in active status if it does not have a deletion timestamp.\")) } } else { if newNamespace.Status.Phase != api.NamespaceTerminating { allErrs = append(allErrs, errs.NewFieldInvalid(\"Status.Phase\", newNamespace.Status.Phase, \"A namespace may only be in terminating status if it has a deletion timestamp.\")) } } return allErrs }", "commid": "kubernetes_pr_6686"}], "negative_passages": []} {"query_id": "q-en-kubernetes-859a8b847337ca2cf9879726f3bdb71f0db5559ee2589591ceae612e41f71a1b", "query": "There is a funny subtle behvaiour in get namespaces, in which if i call more than one time, it seems that the removal from Terminating state is accelarated. It seems this might indicate a bug, or maybe its not a bug and the eventual semantics of is a little funny sometimes. in any case, here's what we are seeing : 1) when i delete a namespace the first time usning kubectl, we get a state change to . that is correct, of course. 2) but ... if make the call a second time, then the same status line is immediately removed from . Expected behaviour : A second call to \"kubectl delete ...\" shouldn't have any effect on the timing of removal of a namespace from 'Terminating' state*... So i guess there is some \"eventual consistency\" subtle bug that is resolved by repeated calls to\nsuggests this may not be e bug, maybe its just the etcd eventual deleter thingy.\nLooks to definitely be a bug. The 2nd delete seems to have interrupted the delete process and artifacts (like events) from the namespace still exist in etcd.\ninvestigating...\noops, see the error, will submit a fix.\nI have a fix, but need a test case, will submit PR tomorrow AM", "positive_passages": [{"docid": "doc-en-kubernetes-e2e629b206b5725045db59a13a56bbb1e07d0387eef65c90c0f5625c5a672282", "text": "} func TestValidateNamespaceStatusUpdate(t *testing.T) { now := util.Now() tests := []struct { oldNamespace api.Namespace namespace api.Namespace valid bool }{ {api.Namespace{}, api.Namespace{}, true}, {api.Namespace{}, api.Namespace{ Status: api.NamespaceStatus{ Phase: api.NamespaceActive, }, }, true}, {api.Namespace{ ObjectMeta: api.ObjectMeta{ Name: \"foo\"}}, api.Namespace{ ObjectMeta: api.ObjectMeta{ Name: \"foo\"}, Name: \"foo\", DeletionTimestamp: &now}, Status: api.NamespaceStatus{ Phase: api.NamespaceTerminating, },", "commid": "kubernetes_pr_6686"}], "negative_passages": []} {"query_id": "q-en-kubernetes-859a8b847337ca2cf9879726f3bdb71f0db5559ee2589591ceae612e41f71a1b", "query": "There is a funny subtle behvaiour in get namespaces, in which if i call more than one time, it seems that the removal from Terminating state is accelarated. It seems this might indicate a bug, or maybe its not a bug and the eventual semantics of is a little funny sometimes. in any case, here's what we are seeing : 1) when i delete a namespace the first time usning kubectl, we get a state change to . that is correct, of course. 2) but ... if make the call a second time, then the same status line is immediately removed from . Expected behaviour : A second call to \"kubectl delete ...\" shouldn't have any effect on the timing of removal of a namespace from 'Terminating' state*... 
So i guess there is some \"eventual consistency\" subtle bug that is resolved by repeated calls to\nsuggests this may not be e bug, maybe its just the etcd eventual deleter thingy.\nLooks to definitely be a bug. The 2nd delete seems to have interrupted the delete process and artifacts (like events) from the namespace still exist in etcd.\ninvestigating...\noops, see the error, will submit a fix.\nI have a fix, but need a test case, will submit PR tomorrow AM", "positive_passages": [{"docid": "doc-en-kubernetes-bef27bef5c19cde72d5a3fb04023d8a16b2c68eb91d6f8f4b4b7eeaffab2c5a2", "text": "Name: \"foo\"}}, api.Namespace{ ObjectMeta: api.ObjectMeta{ Name: \"foo\"}, Status: api.NamespaceStatus{ Phase: api.NamespaceTerminating, }, }, false}, {api.Namespace{ ObjectMeta: api.ObjectMeta{ Name: \"foo\"}}, api.Namespace{ ObjectMeta: api.ObjectMeta{ Name: \"bar\"}, Status: api.NamespaceStatus{ Phase: api.NamespaceTerminating,", "commid": "kubernetes_pr_6686"}], "negative_passages": []} {"query_id": "q-en-kubernetes-859a8b847337ca2cf9879726f3bdb71f0db5559ee2589591ceae612e41f71a1b", "query": "There is a funny subtle behvaiour in get namespaces, in which if i call more than one time, it seems that the removal from Terminating state is accelarated. It seems this might indicate a bug, or maybe its not a bug and the eventual semantics of is a little funny sometimes. in any case, here's what we are seeing : 1) when i delete a namespace the first time usning kubectl, we get a state change to . that is correct, of course. 2) but ... if make the call a second time, then the same status line is immediately removed from . Expected behaviour : A second call to \"kubectl delete ...\" shouldn't have any effect on the timing of removal of a namespace from 'Terminating' state*... So i guess there is some \"eventual consistency\" subtle bug that is resolved by repeated calls to\nsuggests this may not be e bug, maybe its just the etcd eventual deleter thingy.\nLooks to definitely be a bug. The 2nd delete seems to have interrupted the delete process and artifacts (like events) from the namespace still exist in etcd.\ninvestigating...\noops, see the error, will submit a fix.\nI have a fix, but need a test case, will submit PR tomorrow AM", "positive_passages": [{"docid": "doc-en-kubernetes-4523f688787ea72a5eee2a31a0921a8a91901ec4be3d23639647dc18b5f3c47c", "text": "for i, test := range tests { test.namespace.ObjectMeta.ResourceVersion = \"1\" test.oldNamespace.ObjectMeta.ResourceVersion = \"1\" errs := ValidateNamespaceStatusUpdate(&test.oldNamespace, &test.namespace) errs := ValidateNamespaceStatusUpdate(&test.namespace, &test.oldNamespace) if test.valid && len(errs) > 0 { t.Errorf(\"%d: Unexpected error: %v\", i, errs) t.Logf(\"%#v vs %#v\", test.oldNamespace.ObjectMeta, test.namespace.ObjectMeta)", "commid": "kubernetes_pr_6686"}], "negative_passages": []} {"query_id": "q-en-kubernetes-859a8b847337ca2cf9879726f3bdb71f0db5559ee2589591ceae612e41f71a1b", "query": "There is a funny subtle behvaiour in get namespaces, in which if i call more than one time, it seems that the removal from Terminating state is accelarated. It seems this might indicate a bug, or maybe its not a bug and the eventual semantics of is a little funny sometimes. in any case, here's what we are seeing : 1) when i delete a namespace the first time usning kubectl, we get a state change to . that is correct, of course. 2) but ... if make the call a second time, then the same status line is immediately removed from . 
Expected behaviour : A second call to \"kubectl delete ...\" shouldn't have any effect on the timing of removal of a namespace from 'Terminating' state*... So I guess there is some \"eventual consistency\" subtle bug that is resolved by repeated calls to\nsuggests this may not be a bug, maybe it's just the etcd eventual deleter thingy.\nLooks to definitely be a bug. The 2nd delete seems to have interrupted the delete process and artifacts (like events) from the namespace still exist in etcd.\ninvestigating...\noops, see the error, will submit a fix.\nI have a fix, but need a test case, will submit PR tomorrow AM", "positive_passages": [{"docid": "doc-en-kubernetes-4523f688787ea72a5eee2a31a0921a8a91901ec4be3d23639647dc18b5f3c47c", "text": "for i, test := range tests { test.namespace.ObjectMeta.ResourceVersion = \"1\" test.oldNamespace.ObjectMeta.ResourceVersion = \"1\" errs := ValidateNamespaceStatusUpdate(&test.oldNamespace, &test.namespace) errs := ValidateNamespaceStatusUpdate(&test.namespace, &test.oldNamespace) if test.valid && len(errs) > 0 { t.Errorf(\"%d: Unexpected error: %v\", i, errs) t.Logf(\"%#v vs %#v\", test.oldNamespace.ObjectMeta, test.namespace.ObjectMeta)", "commid": "kubernetes_pr_6686"}], "negative_passages": []} {"query_id": "q-en-kubernetes-859a8b847337ca2cf9879726f3bdb71f0db5559ee2589591ceae612e41f71a1b", "query": "There is a funny subtle behaviour in get namespaces, in which if I call more than one time, it seems that the removal from Terminating state is accelerated. It seems this might indicate a bug, or maybe it's not a bug and the eventual semantics of is a little funny sometimes. In any case, here's what we are seeing: 1) when I delete a namespace the first time using kubectl, we get a state change to . that is correct, of course. 2) but ... 
if make the call a second time, then the same status line is immediately removed from . Expected behaviour : A second call to \"kubectl delete ...\" shouldn't have any effect on the timing of removal of a namespace from 'Terminating' state*... So i guess there is some \"eventual consistency\" subtle bug that is resolved by repeated calls to\nsuggests this may not be e bug, maybe its just the etcd eventual deleter thingy.\nLooks to definitely be a bug. The 2nd delete seems to have interrupted the delete process and artifacts (like events) from the namespace still exist in etcd.\ninvestigating...\noops, see the error, will submit a fix.\nI have a fix, but need a test case, will submit PR tomorrow AM", "positive_passages": [{"docid": "doc-en-kubernetes-54ae05a93453d5828d9cb0fcc62d9ce1e8af2c9df4a3b559e1cba98cfa50d57e", "text": "if err != nil { t.Fatalf(\"unexpected error: %v\", err) } } func TestDeleteNamespaceWithIncompleteFinalizers(t *testing.T) { now := util.Now() fakeEtcdClient, helper := newHelper(t) fakeEtcdClient.ChangeIndex = 1 fakeEtcdClient.Data[\"/registry/namespaces/foo\"] = tools.EtcdResponseWithError{ R: &etcd.Response{ Node: &etcd.Node{ Value: runtime.EncodeOrDie(latest.Codec, &api.Namespace{ ObjectMeta: api.ObjectMeta{ Name: \"foo\", DeletionTimestamp: &now, }, Spec: api.NamespaceSpec{ Finalizers: []api.FinalizerName{api.FinalizerKubernetes}, }, Status: api.NamespaceStatus{Phase: api.NamespaceActive}, }), ModifiedIndex: 1, CreatedIndex: 1, }, }, } storage, _, _ := NewStorage(helper) _, err := storage.Delete(api.NewDefaultContext(), \"foo\", nil) if err == nil { t.Fatalf(\"expected error: %v\", err) } } // TODO: when we add life-cycle, this will go to Terminating, and then we need to test Terminating to gone func TestDeleteNamespaceWithCompleteFinalizers(t *testing.T) { now := util.Now() fakeEtcdClient, helper := newHelper(t) fakeEtcdClient.ChangeIndex = 1 fakeEtcdClient.Data[\"/registry/namespaces/foo\"] = tools.EtcdResponseWithError{ R: &etcd.Response{ Node: &etcd.Node{ Value: runtime.EncodeOrDie(latest.Codec, &api.Namespace{ ObjectMeta: api.ObjectMeta{ Name: \"foo\", DeletionTimestamp: &now, }, Spec: api.NamespaceSpec{ Finalizers: []api.FinalizerName{}, }, Status: api.NamespaceStatus{Phase: api.NamespaceActive}, }), ModifiedIndex: 1, CreatedIndex: 1, }, }, } storage, _, _ := NewStorage(helper) _, err := storage.Delete(api.NewDefaultContext(), \"foo\", nil) if err != nil { t.Fatalf(\"unexpected error: %v\", err) } }", "commid": "kubernetes_pr_6686"}], "negative_passages": []} {"query_id": "q-en-kubernetes-859a8b847337ca2cf9879726f3bdb71f0db5559ee2589591ceae612e41f71a1b", "query": "There is a funny subtle behvaiour in get namespaces, in which if i call more than one time, it seems that the removal from Terminating state is accelarated. It seems this might indicate a bug, or maybe its not a bug and the eventual semantics of is a little funny sometimes. in any case, here's what we are seeing : 1) when i delete a namespace the first time usning kubectl, we get a state change to . that is correct, of course. 2) but ... if make the call a second time, then the same status line is immediately removed from . Expected behaviour : A second call to \"kubectl delete ...\" shouldn't have any effect on the timing of removal of a namespace from 'Terminating' state*... So i guess there is some \"eventual consistency\" subtle bug that is resolved by repeated calls to\nsuggests this may not be e bug, maybe its just the etcd eventual deleter thingy.\nLooks to definitely be a bug. 
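For context, the fix quoted above from PR 6686 changes the phase check to namespace.DeletionTimestamp.IsZero() and refuses the final etcd delete while spec.finalizers is non-empty, so a repeated "kubectl delete" cannot short-circuit termination. A minimal, self-contained Go sketch of that idempotent flow, using stand-in types rather than the real pkg/api ones (an illustration, not the upstream registry code), could look like:

package main

import (
	"fmt"
	"time"
)

// Stand-in for the real api.Namespace; illustration only.
type Namespace struct {
	Name              string
	DeletionTimestamp *time.Time
	Finalizers        []string
	Phase             string
}

// deleteNamespace sketches the idempotent delete flow: the first delete marks
// the namespace Terminating; a repeated delete while finalizers remain only
// reports that termination is in progress instead of removing the object (and
// leaving artifacts such as events behind in etcd).
func deleteNamespace(ns *Namespace) (removed bool, err error) {
	if ns.DeletionTimestamp == nil || ns.DeletionTimestamp.IsZero() {
		now := time.Now()
		ns.DeletionTimestamp = &now
		ns.Phase = "Terminating"
		return false, nil // the real registry persists this status change
	}
	if len(ns.Finalizers) != 0 {
		return false, fmt.Errorf("namespace %v termination is in progress, waiting for %v",
			ns.Name, ns.Finalizers)
	}
	return true, nil // only now is the object actually removed from storage
}

func main() {
	ns := &Namespace{Name: "foo", Finalizers: []string{"kubernetes"}}
	fmt.Println(deleteNamespace(ns)) // first call: moves the namespace to Terminating
	fmt.Println(deleteNamespace(ns)) // second call: "termination is in progress" error
}

In this sketch the second call only reports that termination is in progress; actual removal from storage happens once the finalizer list has been emptied.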
The 2nd delete seems to have interrupted the delete process and artifacts (like events) from the namespace still exist in etcd.\ninvestigating...\noops, see the error, will submit a fix.\nI have a fix, but need a test case, will submit PR tomorrow AM", "positive_passages": [{"docid": "doc-en-kubernetes-1b33fed19f7ef15b640606726c76c5869145649cdb356928e732a50e9f6cfb9c", "text": "\"testing\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/util\" ) func TestNamespaceStrategy(t *testing.T) {", "commid": "kubernetes_pr_6686"}], "negative_passages": []} {"query_id": "q-en-kubernetes-859a8b847337ca2cf9879726f3bdb71f0db5559ee2589591ceae612e41f71a1b", "query": "There is a funny subtle behvaiour in get namespaces, in which if i call more than one time, it seems that the removal from Terminating state is accelarated. It seems this might indicate a bug, or maybe its not a bug and the eventual semantics of is a little funny sometimes. in any case, here's what we are seeing : 1) when i delete a namespace the first time usning kubectl, we get a state change to . that is correct, of course. 2) but ... if make the call a second time, then the same status line is immediately removed from . Expected behaviour : A second call to \"kubectl delete ...\" shouldn't have any effect on the timing of removal of a namespace from 'Terminating' state*... So i guess there is some \"eventual consistency\" subtle bug that is resolved by repeated calls to\nsuggests this may not be e bug, maybe its just the etcd eventual deleter thingy.\nLooks to definitely be a bug. The 2nd delete seems to have interrupted the delete process and artifacts (like events) from the namespace still exist in etcd.\ninvestigating...\noops, see the error, will submit a fix.\nI have a fix, but need a test case, will submit PR tomorrow AM", "positive_passages": [{"docid": "doc-en-kubernetes-f6f7c6384c04bab0a1dd81fa2e794f07858a9d7b47b566b9cf486eeda7169e53", "text": "if StatusStrategy.AllowCreateOnUpdate() { t.Errorf(\"Namespaces should not allow create on update\") } now := util.Now() oldNamespace := &api.Namespace{ ObjectMeta: api.ObjectMeta{Name: \"foo\", ResourceVersion: \"10\"}, Spec: api.NamespaceSpec{Finalizers: []api.FinalizerName{\"kubernetes\"}}, Status: api.NamespaceStatus{Phase: api.NamespaceActive}, } namespace := &api.Namespace{ ObjectMeta: api.ObjectMeta{Name: \"foo\", ResourceVersion: \"9\"}, ObjectMeta: api.ObjectMeta{Name: \"foo\", ResourceVersion: \"9\", DeletionTimestamp: &now}, Status: api.NamespaceStatus{Phase: api.NamespaceTerminating}, } StatusStrategy.PrepareForUpdate(namespace, oldNamespace)", "commid": "kubernetes_pr_6686"}], "negative_passages": []} {"query_id": "q-en-kubernetes-0c0bb3600e99710565f99e8be4bd669de365f6aed46898aff519521ca79dfa0d", "query": "GCE has a length limit of 63 characters for its load balancer components (target pools and forwarding rules), and AWS has a length limit of 32 characters. Our current load balancer name construction of clustername-namespace-servicename can exceed both limits, especially AWS's. We should use a shorter name construction method less likely to hit these limits.\nThanks for filing this Alex. My suggestion would be to use a simple UID, as per utils.NewUID(). We'll need to store the mapping from {clustername, namespace, servicename} -UID somewhere stable like etcd. Using anything based directly on {cluster, namespace, name}, for example a stable hash of each, has the problem that the aforementioned tuple is not unique over time. 
This raises the possibility of e.g. the following scenario: creates mycluster/mynamespace/myservicename, which creates external load balancer named stablehash(mycluster,mynamespace,myservicename) deletes mycluster/mynamespace/myservicename, but at the time MyCloudProvider is down, so kubernetes can't delete ELB named stablehash(mycluster,mynamespace,myservicename). Kubernetes keeps trying in the background to delete the ELB. the mean time user creates a new service named the same as her old one mycluster/mynamespace/myservicename. Kubernetes tries to create an associated ELB named stablehash(mycluster,mynamespace,myservicename), but this fails, because the ELB exists already due to 2 above. ... etc. Badness ensues.\nAssigning to Chao Xu, as this makes for a good starter bug.\nHmmmm.... can't seem to assign to caesarxuchao yet. Not sure why.\nHe has to have write privileges on the repo before issues/PRs can be assigned to him.\nBut we have many contributors with issues assigned to them that are not in the kubernetes-write group (that only has 48 members). I must be missing something. How do I grant him write privileges on the repo?\nWe do? That's not how I understand it to work: If you want to add him, go .\nhas been because the load balancer name changes if the service's UID changes (e.g. if it's deleted and recreated). Think we can change this? A hash of cluster name, namespace, and service name should get around it, but there may be better ideas as well.\nSure. I'm not confident that I can fix this quickly. If it's urgent, maybe you can assign some one more experienced to fix it? One question: I guess the ELB's name should stay unchanged as long as the \"zone+namespace+service name\" doesn't change, am I right?\nI think I'm the guilty party here. I was unaware that when we create a new service with the same name as a previously existing service, that it needed to keep the same load balancer name and IP. This seems broken to me, but I may be missing the larger context. We can quite safely roll back the change while we decide on the best alternative solution. Q , Chao Xu wrote:\nI think the real breakage is that we don't have any way to claim an LB-IP at a larger lifespan than a single service. Alex, can you go back to OP and get a sense of what's happening there to trigger this? , Quinton Hoole wrote:\nWill do. Even if the name is kept consistent as it was before, with the way everything currently works the IP won't be consistent.\nyeah, I'm not positive that the name has to be consistent. What we need is a way to claim the same public IP address, and we already have that (at least for GCE) Assigning to me, and I'll validate the length requirements, make sure public ip assignment works for AWS and close out this issue.\nNote there are two separate things here: (1) a purely-name-based construction can exceed length limits given a single service and (2) some form of across-service consistency is apparently needed. (1) is trivially solvable by either always using a hash or alternatively using a two-stage process where you generate a desired name and only hashify if it's longer than the length limits by truncating it and appending a hash of said name.\nAWS's load balancer hasn't been merged yet, so there's not much to check out there. 
I'm going to be reducing the churn of load balancers later today, which should help prevent IP address changes for services that are modified.\nI think we need to be able to figure out which Service an LB is attached to from the Service, so that LBs can't get orphaned. regarding being able to claim an LB IP. I don't think that is covered in revamp... , Brendan Burns wrote:\nI think that we need to roll back and think this issue through a bit better. See comment on that PR.", "positive_passages": [{"docid": "doc-en-kubernetes-221c64b381a3e63c9a8504c0237f740ee33c62c5df53a0e58a4646ceb83c7e2a", "text": "package gce_cloud import ( \"crypto/md5\" \"fmt\" \"io\" \"io/ioutil\"", "commid": "kubernetes_pr_7609"}], "negative_passages": []} {"query_id": "q-en-kubernetes-0c0bb3600e99710565f99e8be4bd669de365f6aed46898aff519521ca79dfa0d", "query": "GCE has a length limit of 63 characters for its load balancer components (target pools and forwarding rules), and AWS has a length limit of 32 characters. Our current load balancer name construction of clustername-namespace-servicename can exceed both limits, especially AWS's. We should use a shorter name construction method less likely to hit these limits.\nThanks for filing this Alex. My suggestion would be to use a simple UID, as per utils.NewUID(). We'll need to store the mapping from {clustername, namespace, servicename} -UID somewhere stable like etcd. Using anything based directly on {cluster, namespace, name}, for example a stable hash of each, has the problem that the aforementioned tuple is not unique over time. This raises the possibility of e.g. the following scenario: creates mycluster/mynamespace/myservicename, which creates external load balancer named stablehash(mycluster,mynamespace,myservicename) deletes mycluster/mynamespace/myservicename, but at the time MyCloudProvider is down, so kubernetes can't delete ELB named stablehash(mycluster,mynamespace,myservicename). Kubernetes keeps trying in the background to delete the ELB. the mean time user creates a new service named the same as her old one mycluster/mynamespace/myservicename. Kubernetes tries to create an associated ELB named stablehash(mycluster,mynamespace,myservicename), but this fails, because the ELB exists already due to 2 above. ... etc. Badness ensues.\nAssigning to Chao Xu, as this makes for a good starter bug.\nHmmmm.... can't seem to assign to caesarxuchao yet. Not sure why.\nHe has to have write privileges on the repo before issues/PRs can be assigned to him.\nBut we have many contributors with issues assigned to them that are not in the kubernetes-write group (that only has 48 members). I must be missing something. How do I grant him write privileges on the repo?\nWe do? That's not how I understand it to work: If you want to add him, go .\nhas been because the load balancer name changes if the service's UID changes (e.g. if it's deleted and recreated). Think we can change this? A hash of cluster name, namespace, and service name should get around it, but there may be better ideas as well.\nSure. I'm not confident that I can fix this quickly. If it's urgent, maybe you can assign some one more experienced to fix it? One question: I guess the ELB's name should stay unchanged as long as the \"zone+namespace+service name\" doesn't change, am I right?\nI think I'm the guilty party here. I was unaware that when we create a new service with the same name as a previously existing service, that it needed to keep the same load balancer name and IP. 
This seems broken to me, but I may be missing the larger context. We can quite safely roll back the change while we decide on the best alternative solution. Q , Chao Xu wrote:\nI think the real breakage is that we don't have any way to claim an LB-IP at a larger lifespan than a single service. Alex, can you go back to OP and get a sense of what's happening there to trigger this? , Quinton Hoole wrote:\nWill do. Even if the name is kept consistent as it was before, with the way everything currently works the IP won't be consistent.\nyeah, I'm not positive that the name has to be consistent. What we need is a way to claim the same public IP address, and we already have that (at least for GCE) Assigning to me, and I'll validate the length requirements, make sure public ip assignment works for AWS and close out this issue.\nNote there are two separate things here: (1) a purely-name-based construction can exceed length limits given a single service and (2) some form of across-service consistency is apparently needed. (1) is trivially solvable by either always using a hash or alternatively using a two-stage process where you generate a desired name and only hashify if it's longer than the length limits by truncating it and appending a hash of said name.\nAWS's load balancer hasn't been merged yet, so there's not much to check out there. I'm going to be reducing the churn of load balancers later today, which should help prevent IP address changes for services that are modified.\nI think we need to be able to figure out which Service an LB is attached to from the Service, so that LBs can't get orphaned. regarding being able to claim an LB IP. I don't think that is covered in revamp... , Brendan Burns wrote:\nI think that we need to roll back and think this issue through a bit better. See comment on that PR.", "positive_passages": [{"docid": "doc-en-kubernetes-7c38531342edb56a5c8d7f0af677b0f3de4b3de5c422f21ebc94dfdcff3de8f3", "text": "\"google.golang.org/cloud/compute/metadata\" ) const LOAD_BALANCER_NAME_MAX_LENGTH = 63 // GCECloud is an implementation of Interface, TCPLoadBalancer and Instances for Google Compute Engine. type GCECloud struct { service *compute.Service", "commid": "kubernetes_pr_7609"}], "negative_passages": []} {"query_id": "q-en-kubernetes-0c0bb3600e99710565f99e8be4bd669de365f6aed46898aff519521ca79dfa0d", "query": "GCE has a length limit of 63 characters for its load balancer components (target pools and forwarding rules), and AWS has a length limit of 32 characters. Our current load balancer name construction of clustername-namespace-servicename can exceed both limits, especially AWS's. We should use a shorter name construction method less likely to hit these limits.\nThanks for filing this Alex. My suggestion would be to use a simple UID, as per utils.NewUID(). We'll need to store the mapping from {clustername, namespace, servicename} -UID somewhere stable like etcd. Using anything based directly on {cluster, namespace, name}, for example a stable hash of each, has the problem that the aforementioned tuple is not unique over time. This raises the possibility of e.g. the following scenario: creates mycluster/mynamespace/myservicename, which creates external load balancer named stablehash(mycluster,mynamespace,myservicename) deletes mycluster/mynamespace/myservicename, but at the time MyCloudProvider is down, so kubernetes can't delete ELB named stablehash(mycluster,mynamespace,myservicename). Kubernetes keeps trying in the background to delete the ELB. 
the mean time user creates a new service named the same as her old one mycluster/mynamespace/myservicename. Kubernetes tries to create an associated ELB named stablehash(mycluster,mynamespace,myservicename), but this fails, because the ELB exists already due to 2 above. ... etc. Badness ensues.\nAssigning to Chao Xu, as this makes for a good starter bug.\nHmmmm.... can't seem to assign to caesarxuchao yet. Not sure why.\nHe has to have write privileges on the repo before issues/PRs can be assigned to him.\nBut we have many contributors with issues assigned to them that are not in the kubernetes-write group (that only has 48 members). I must be missing something. How do I grant him write privileges on the repo?\nWe do? That's not how I understand it to work: If you want to add him, go .\nhas been because the load balancer name changes if the service's UID changes (e.g. if it's deleted and recreated). Think we can change this? A hash of cluster name, namespace, and service name should get around it, but there may be better ideas as well.\nSure. I'm not confident that I can fix this quickly. If it's urgent, maybe you can assign some one more experienced to fix it? One question: I guess the ELB's name should stay unchanged as long as the \"zone+namespace+service name\" doesn't change, am I right?\nI think I'm the guilty party here. I was unaware that when we create a new service with the same name as a previously existing service, that it needed to keep the same load balancer name and IP. This seems broken to me, but I may be missing the larger context. We can quite safely roll back the change while we decide on the best alternative solution. Q , Chao Xu wrote:\nI think the real breakage is that we don't have any way to claim an LB-IP at a larger lifespan than a single service. Alex, can you go back to OP and get a sense of what's happening there to trigger this? , Quinton Hoole wrote:\nWill do. Even if the name is kept consistent as it was before, with the way everything currently works the IP won't be consistent.\nyeah, I'm not positive that the name has to be consistent. What we need is a way to claim the same public IP address, and we already have that (at least for GCE) Assigning to me, and I'll validate the length requirements, make sure public ip assignment works for AWS and close out this issue.\nNote there are two separate things here: (1) a purely-name-based construction can exceed length limits given a single service and (2) some form of across-service consistency is apparently needed. (1) is trivially solvable by either always using a hash or alternatively using a two-stage process where you generate a desired name and only hashify if it's longer than the length limits by truncating it and appending a hash of said name.\nAWS's load balancer hasn't been merged yet, so there's not much to check out there. I'm going to be reducing the churn of load balancers later today, which should help prevent IP address changes for services that are modified.\nI think we need to be able to figure out which Service an LB is attached to from the Service, so that LBs can't get orphaned. regarding being able to claim an LB IP. I don't think that is covered in revamp... , Brendan Burns wrote:\nI think that we need to roll back and think this issue through a bit better. See comment on that PR.", "positive_passages": [{"docid": "doc-en-kubernetes-9f0f66f1e2af5ad5cd1119dd9952c9a96f14b7b79017b33715264980b7ff9712", "text": "} } func normalizeName(name string) string { // If it's short enough, just leave it. 
if len(name) < LOAD_BALANCER_NAME_MAX_LENGTH-6 { return name } // Hash and truncate hash := md5.Sum([]byte(name)) truncated := name[0 : LOAD_BALANCER_NAME_MAX_LENGTH-6] shortHash := hash[0:6] return fmt.Sprintf(\"%s%s\", truncated, string(shortHash)) } // CreateTCPLoadBalancer is an implementation of TCPLoadBalancer.CreateTCPLoadBalancer. // TODO(a-robinson): Don't just ignore specified IP addresses. Check if they're // owned by the project and available to be used, and use them if they are. func (gce *GCECloud) CreateTCPLoadBalancer(name, region string, externalIP net.IP, ports []int, hosts []string, affinityType api.AffinityType) (string, error) { func (gce *GCECloud) CreateTCPLoadBalancer(origName, region string, externalIP net.IP, ports []int, hosts []string, affinityType api.AffinityType) (string, error) { name := normalizeName(origName) err := gce.makeTargetPool(name, region, hosts, translateAffinityType(affinityType)) if err != nil { if !isHTTPErrorCode(err, http.StatusConflict) {", "commid": "kubernetes_pr_7609"}], "negative_passages": []} {"query_id": "q-en-kubernetes-0c0bb3600e99710565f99e8be4bd669de365f6aed46898aff519521ca79dfa0d", "query": "GCE has a length limit of 63 characters for its load balancer components (target pools and forwarding rules), and AWS has a length limit of 32 characters. Our current load balancer name construction of clustername-namespace-servicename can exceed both limits, especially AWS's. We should use a shorter name construction method less likely to hit these limits.\nThanks for filing this Alex. My suggestion would be to use a simple UID, as per utils.NewUID(). We'll need to store the mapping from {clustername, namespace, servicename} -UID somewhere stable like etcd. Using anything based directly on {cluster, namespace, name}, for example a stable hash of each, has the problem that the aforementioned tuple is not unique over time. This raises the possibility of e.g. the following scenario: creates mycluster/mynamespace/myservicename, which creates external load balancer named stablehash(mycluster,mynamespace,myservicename) deletes mycluster/mynamespace/myservicename, but at the time MyCloudProvider is down, so kubernetes can't delete ELB named stablehash(mycluster,mynamespace,myservicename). Kubernetes keeps trying in the background to delete the ELB. the mean time user creates a new service named the same as her old one mycluster/mynamespace/myservicename. Kubernetes tries to create an associated ELB named stablehash(mycluster,mynamespace,myservicename), but this fails, because the ELB exists already due to 2 above. ... etc. Badness ensues.\nAssigning to Chao Xu, as this makes for a good starter bug.\nHmmmm.... can't seem to assign to caesarxuchao yet. Not sure why.\nHe has to have write privileges on the repo before issues/PRs can be assigned to him.\nBut we have many contributors with issues assigned to them that are not in the kubernetes-write group (that only has 48 members). I must be missing something. How do I grant him write privileges on the repo?\nWe do? That's not how I understand it to work: If you want to add him, go .\nhas been because the load balancer name changes if the service's UID changes (e.g. if it's deleted and recreated). Think we can change this? A hash of cluster name, namespace, and service name should get around it, but there may be better ideas as well.\nSure. I'm not confident that I can fix this quickly. If it's urgent, maybe you can assign some one more experienced to fix it? 
One question: I guess the ELB's name should stay unchanged as long as the \"zone+namespace+service name\" doesn't change, am I right?\nI think I'm the guilty party here. I was unaware that when we create a new service with the same name as a previously existing service, that it needed to keep the same load balancer name and IP. This seems broken to me, but I may be missing the larger context. We can quite safely roll back the change while we decide on the best alternative solution. Q , Chao Xu wrote:\nI think the real breakage is that we don't have any way to claim an LB-IP at a larger lifespan than a single service. Alex, can you go back to OP and get a sense of what's happening there to trigger this? , Quinton Hoole wrote:\nWill do. Even if the name is kept consistent as it was before, with the way everything currently works the IP won't be consistent.\nyeah, I'm not positive that the name has to be consistent. What we need is a way to claim the same public IP address, and we already have that (at least for GCE) Assigning to me, and I'll validate the length requirements, make sure public ip assignment works for AWS and close out this issue.\nNote there are two separate things here: (1) a purely-name-based construction can exceed length limits given a single service and (2) some form of across-service consistency is apparently needed. (1) is trivially solvable by either always using a hash or alternatively using a two-stage process where you generate a desired name and only hashify if it's longer than the length limits by truncating it and appending a hash of said name.\nAWS's load balancer hasn't been merged yet, so there's not much to check out there. I'm going to be reducing the churn of load balancers later today, which should help prevent IP address changes for services that are modified.\nI think we need to be able to figure out which Service an LB is attached to from the Service, so that LBs can't get orphaned. regarding being able to claim an LB IP. I don't think that is covered in revamp... , Brendan Burns wrote:\nI think that we need to roll back and think this issue through a bit better. See comment on that PR.", "positive_passages": [{"docid": "doc-en-kubernetes-0589cd5e0961021b139354ddc49080f986af63d6dd6d63a065c8eb71ea824f30", "text": "} // UpdateTCPLoadBalancer is an implementation of TCPLoadBalancer.UpdateTCPLoadBalancer. func (gce *GCECloud) UpdateTCPLoadBalancer(name, region string, hosts []string) error { func (gce *GCECloud) UpdateTCPLoadBalancer(origName, region string, hosts []string) error { name := normalizeName(origName) pool, err := gce.service.TargetPools.Get(gce.projectID, region, name).Do() if err != nil { return err", "commid": "kubernetes_pr_7609"}], "negative_passages": []} {"query_id": "q-en-kubernetes-0c0bb3600e99710565f99e8be4bd669de365f6aed46898aff519521ca79dfa0d", "query": "GCE has a length limit of 63 characters for its load balancer components (target pools and forwarding rules), and AWS has a length limit of 32 characters. Our current load balancer name construction of clustername-namespace-servicename can exceed both limits, especially AWS's. We should use a shorter name construction method less likely to hit these limits.\nThanks for filing this Alex. My suggestion would be to use a simple UID, as per utils.NewUID(). We'll need to store the mapping from {clustername, namespace, servicename} -UID somewhere stable like etcd. 
Using anything based directly on {cluster, namespace, name}, for example a stable hash of each, has the problem that the aforementioned tuple is not unique over time. This raises the possibility of e.g. the following scenario: creates mycluster/mynamespace/myservicename, which creates external load balancer named stablehash(mycluster,mynamespace,myservicename) deletes mycluster/mynamespace/myservicename, but at the time MyCloudProvider is down, so kubernetes can't delete ELB named stablehash(mycluster,mynamespace,myservicename). Kubernetes keeps trying in the background to delete the ELB. the mean time user creates a new service named the same as her old one mycluster/mynamespace/myservicename. Kubernetes tries to create an associated ELB named stablehash(mycluster,mynamespace,myservicename), but this fails, because the ELB exists already due to 2 above. ... etc. Badness ensues.\nAssigning to Chao Xu, as this makes for a good starter bug.\nHmmmm.... can't seem to assign to caesarxuchao yet. Not sure why.\nHe has to have write privileges on the repo before issues/PRs can be assigned to him.\nBut we have many contributors with issues assigned to them that are not in the kubernetes-write group (that only has 48 members). I must be missing something. How do I grant him write privileges on the repo?\nWe do? That's not how I understand it to work: If you want to add him, go .\nhas been because the load balancer name changes if the service's UID changes (e.g. if it's deleted and recreated). Think we can change this? A hash of cluster name, namespace, and service name should get around it, but there may be better ideas as well.\nSure. I'm not confident that I can fix this quickly. If it's urgent, maybe you can assign some one more experienced to fix it? One question: I guess the ELB's name should stay unchanged as long as the \"zone+namespace+service name\" doesn't change, am I right?\nI think I'm the guilty party here. I was unaware that when we create a new service with the same name as a previously existing service, that it needed to keep the same load balancer name and IP. This seems broken to me, but I may be missing the larger context. We can quite safely roll back the change while we decide on the best alternative solution. Q , Chao Xu wrote:\nI think the real breakage is that we don't have any way to claim an LB-IP at a larger lifespan than a single service. Alex, can you go back to OP and get a sense of what's happening there to trigger this? , Quinton Hoole wrote:\nWill do. Even if the name is kept consistent as it was before, with the way everything currently works the IP won't be consistent.\nyeah, I'm not positive that the name has to be consistent. What we need is a way to claim the same public IP address, and we already have that (at least for GCE) Assigning to me, and I'll validate the length requirements, make sure public ip assignment works for AWS and close out this issue.\nNote there are two separate things here: (1) a purely-name-based construction can exceed length limits given a single service and (2) some form of across-service consistency is apparently needed. (1) is trivially solvable by either always using a hash or alternatively using a two-stage process where you generate a desired name and only hashify if it's longer than the length limits by truncating it and appending a hash of said name.\nAWS's load balancer hasn't been merged yet, so there's not much to check out there. 
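The two-stage idea mentioned above (keep the readable name when it fits, otherwise truncate it and append a short hash of the full name) is essentially what PR 7609's normalizeName does for GCE. A self-contained sketch of that approach, kept deliberately simple and not the exact upstream code, could be:

package main

import (
	"crypto/md5"
	"fmt"
)

// shortenName keeps a human-readable load balancer name when it fits within
// the provider limit (63 chars on GCE, 32 on AWS); otherwise it truncates the
// name and appends a short hash of the full name so the result stays within
// the limit and is stable for a given input.
func shortenName(name string, limit int) string {
	if len(name) <= limit {
		return name
	}
	suffix := fmt.Sprintf("%x", md5.Sum([]byte(name)))[:6]
	return name[:limit-len(suffix)] + suffix
}

func main() {
	long := "mycluster-mynamespace-a-service-name-that-easily-overflows-aws-limits"
	fmt.Println(shortenName(long, 32)) // 32-char result ending in a 6-char hash
	fmt.Println(shortenName("short-name", 32))
}

The caveat raised earlier in the thread still applies: any purely name-derived scheme reuses the same load balancer name when a service is deleted and recreated, so a failed cleanup can collide with the replacement service.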
I'm going to be reducing the churn of load balancers later today, which should help prevent IP address changes for services that are modified.\nI think we need to be able to figure out which Service an LB is attached to from the Service, so that LBs can't get orphaned. regarding being able to claim an LB IP. I don't think that is covered in revamp... , Brendan Burns wrote:\nI think that we need to roll back and think this issue through a bit better. See comment on that PR.", "positive_passages": [{"docid": "doc-en-kubernetes-be640f38bac80d03141af601bc3f04dc941ec4aba3d66645438ffee1b9dd4be4", "text": "} // DeleteTCPLoadBalancer is an implementation of TCPLoadBalancer.DeleteTCPLoadBalancer. func (gce *GCECloud) DeleteTCPLoadBalancer(name, region string) error { func (gce *GCECloud) DeleteTCPLoadBalancer(origName, region string) error { name := normalizeName(origName) op, err := gce.service.ForwardingRules.Delete(gce.projectID, region, name).Do() if err != nil && isHTTPErrorCode(err, http.StatusNotFound) { glog.Infof(\"Forwarding rule %s already deleted. Continuing to delete target pool.\", name)", "commid": "kubernetes_pr_7609"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d40e4a6c562fc50ef12d5d4b4bfe6a402a7cb5ec672dfc45502a752fbe0291f6", "query": "Rackspace support has not ben updated in a few months. cluster/rackspace/cloud-config/minion-cloud- is using etcd_servers for kubelet and kube-proxy, but support for them has been removed since v0.9, I think. I'll remove it in a few days if no-one makes a counter-offer to get the config working with latest release.\nThere's been some work on this but it is blocked while we figure out some load-balancer design stuff. , Eric Tune wrote:\nOkay, I will remove the link to rackspace until it is fixed (). Leaving this issue open to remind us to delete it in, say, a month, if no progress is made.\ncc\nSee also\nI'd prefer to try to get things updated and keep support. However I don't want to continually become stale. every few months either.\nHi all, I want to use kubernetes on my existing rackspace infrastructure but I don't understand if rackspace suppor was removed or not. In Doc, under \"custom cloud solution\" it's linked but I wasn't able to deploy it neither with install method nor with . With i got a error (I have 16GB or RAM!). thanks in advance, Fx\nIt looks like no support has happened on Rackspace for a year.\nYou should not expect it to work, and we should delete rackspace support.\nACK.\nUnfortunately I think we'll need to drop support for Rackspace as a deployment target since I can't commit to keeping the deployment scripts up to date. Rackspace support as a cloudprovider should be maintained.\nI would be willing to support the Rackspace deployment target, if there is interest to keep it alive.\nThere would be definitely an interest from our side - sad to see that Rackspace has no interest in maintaining it\nNo problem, I will take over the maintenance. It will take me a bit of time to get caught up, but I will get it done as soon as possible.\nDo you think it would be hard to get the of LBaaS working ?\nSorry it took me a bit to respond. there is a quick update as to where I am. I was busy with some stuff, that prevented me from working on this. But that has been resolved, and should have updates by the end of next week. However what's causing the delays is the lack of a good API for Rackspace Cloud. I've used the gophercloud, it works but I'm not a fan. 
Additionally gophercloud doesn't have full API support (one of the things I'm not a fan). This would make it hard to implement LBaaS. I should have a better idea of your request sometime next week.\nI have seen this api docu: - is it the same as you saw? I think it should be doable with it.", "positive_passages": [{"docid": "doc-en-kubernetes-f0899a70d4d7701f5e40d31d72321d33920f5c1a816eb7c327d52b70543cc99c", "text": " #!/bin/bash # Copyright 2015 The Kubernetes Authors. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Create generic token following GCE standard create_token() { echo $(cat /dev/urandom | base64 | tr -d \"=+/\" | dd bs=32 count=1 2> /dev/null) } get_tokens_from_csv() { KUBE_BEARER_TOKEN=$(awk -F, '/admin/ {print $1}' ${KUBE_TEMP}/${1}_tokens.csv) KUBELET_TOKEN=$(awk -F, '/kubelet/ {print $1}' ${KUBE_TEMP}/${1}_tokens.csv) KUBE_PROXY_TOKEN=$(awk -F, '/kube_proxy/ {print $1}' ${KUBE_TEMP}/${1}_tokens.csv) } generate_admin_token() { echo \"$(create_token),admin,admin\" >> ${KUBE_TEMP}/known_tokens.csv } # Creates a csv file each time called (i.e one per kubelet). generate_kubelet_tokens() { echo \"$(create_token),kubelet,kubelet\" > ${KUBE_TEMP}/${1}_tokens.csv echo \"$(create_token),kube_proxy,kube_proxy\" >> ${KUBE_TEMP}/${1}_tokens.csv } ", "commid": "kubernetes_pr_45032"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d40e4a6c562fc50ef12d5d4b4bfe6a402a7cb5ec672dfc45502a752fbe0291f6", "query": "Rackspace support has not ben updated in a few months. cluster/rackspace/cloud-config/minion-cloud- is using etcd_servers for kubelet and kube-proxy, but support for them has been removed since v0.9, I think. I'll remove it in a few days if no-one makes a counter-offer to get the config working with latest release.\nThere's been some work on this but it is blocked while we figure out some load-balancer design stuff. , Eric Tune wrote:\nOkay, I will remove the link to rackspace until it is fixed (). Leaving this issue open to remind us to delete it in, say, a month, if no progress is made.\ncc\nSee also\nI'd prefer to try to get things updated and keep support. However I don't want to continually become stale. every few months either.\nHi all, I want to use kubernetes on my existing rackspace infrastructure but I don't understand if rackspace suppor was removed or not. In Doc, under \"custom cloud solution\" it's linked but I wasn't able to deploy it neither with install method nor with . With i got a error (I have 16GB or RAM!). thanks in advance, Fx\nIt looks like no support has happened on Rackspace for a year.\nYou should not expect it to work, and we should delete rackspace support.\nACK.\nUnfortunately I think we'll need to drop support for Rackspace as a deployment target since I can't commit to keeping the deployment scripts up to date. 
Rackspace support as a cloudprovider should be maintained.\nI would be willing to support the Rackspace deployment target, if there is interest to keep it alive.\nThere would be definitely an interest from our side - sad to see that Rackspace has no interest in maintaining it\nNo problem, I will take over the maintenance. It will take me a bit of time to get caught up, but I will get it done as soon as possible.\nDo you think it would be hard to get the of LBaaS working ?\nSorry it took me a bit to respond. there is a quick update as to where I am. I was busy with some stuff, that prevented me from working on this. But that has been resolved, and should have updates by the end of next week. However what's causing the delays is the lack of a good API for Rackspace Cloud. I've used the gophercloud, it works but I'm not a fan. Additionally gophercloud doesn't have full API support (one of the things I'm not a fan). This would make it hard to implement LBaaS. I should have a better idea of your request sometime next week.\nI have seen this api docu: - is it the same as you saw? I think it should be doable with it.", "positive_passages": [{"docid": "doc-en-kubernetes-234fe7de88afd6e051cb253fc0993146b8838439e8d09380ad65bb1527880bfa", "text": " #cloud-config write_files: - path: /etc/cloud.conf permissions: 0600 content: | [Global] auth-url = OS_AUTH_URL username = OS_USERNAME api-key = OS_PASSWORD tenant-id = OS_TENANT_NAME region = OS_REGION_NAME [LoadBalancer] subnet-id = 11111111-1111-1111-1111-111111111111 - path: /opt/bin/git-kubernetes-nginx.sh permissions: 0755 content: | #!/bin/bash git clone https://github.com/thommay/kubernetes_nginx /opt/kubernetes_nginx /usr/bin/cp /opt/.kubernetes_auth /opt/kubernetes_nginx/.kubernetes_auth /opt/kubernetes_nginx/git-kubernetes-nginx.sh - path: /opt/bin/download-release.sh permissions: 0755 content: | #!/bin/bash # This temp URL is only good for the length of time specified at cluster creation time. # Afterward, it will result in a 403. OBJECT_URL=\"CLOUD_FILES_URL\" if [ ! -s /opt/kubernetes.tar.gz ] then echo \"Downloading release ($OBJECT_URL)\" wget \"${OBJECT_URL}\" -O /opt/kubernetes.tar.gz echo \"Unpacking release\" rm -rf /opt/kubernetes || false tar xzf /opt/kubernetes.tar.gz -C /opt/ else echo \"kubernetes release found. 
Skipping download.\" fi - path: /opt/.kubernetes_auth permissions: 0600 content: | KUBE_USER:KUBE_PASSWORD coreos: etcd2: discovery: https://discovery.etcd.io/DISCOVERY_ID advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001 initial-advertise-peer-urls: http://$private_ipv4:2380 listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001 flannel: ip_masq: true interface: eth2 fleet: public-ip: $private_ipv4 metadata: kubernetes_role=master update: reboot-strategy: off units: - name: etcd2.service command: start - name: fleet.service command: start - name: flanneld.service drop-ins: - name: 50-flannel.conf content: | [Unit] Requires=etcd2.service After=etcd2.service [Service] ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{\"Network\":\"KUBE_NETWORK\", \"Backend\": {\"Type\": \"host-gw\"}}' command: start - name: generate-serviceaccount-key.service command: start content: | [Unit] Description=Generate service-account key file [Service] ExecStartPre=-/usr/bin/mkdir -p /var/run/kubernetes/ ExecStart=/bin/openssl genrsa -out /var/run/kubernetes/kube-serviceaccount.key 2048 2>/dev/null RemainAfterExit=yes Type=oneshot - name: docker.service command: start drop-ins: - name: 51-docker-mirror.conf content: | [Unit] # making sure that flanneld finished startup, otherwise containers # won't land in flannel's network... Requires=flanneld.service After=flanneld.service Restart=Always - name: download-release.service command: start content: | [Unit] Description=Downloads Kubernetes Release After=network-online.target Requires=network-online.target [Service] Type=oneshot RemainAfterExit=yes ExecStart=/usr/bin/bash /opt/bin/download-release.sh - name: kube-apiserver.service command: start content: | [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes After=network-online.target Requires=network-online.target After=download-release.service Requires=download-release.service Requires=generate-serviceaccount-key.service After=generate-serviceaccount-key.service [Service] ExecStartPre=/usr/bin/ln -sf /opt/kubernetes/server/bin/kube-apiserver /opt/bin/kube-apiserver ExecStartPre=/usr/bin/mkdir -p /var/lib/kube-apiserver ExecStart=/opt/bin/kube-apiserver --address=127.0.0.1 --cloud-provider=rackspace --cloud-config=/etc/cloud.conf --etcd-servers=http://127.0.0.1:4001 --logtostderr=true --port=8080 --service-cluster-ip-range=SERVICE_CLUSTER_IP_RANGE --token-auth-file=/var/lib/kube-apiserver/known_tokens.csv --v=2 --service-account-key-file=/var/run/kubernetes/kube-serviceaccount.key --service-account-lookup=true --admission-control=NamespaceLifecycle,NamespaceAutoProvision,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultTolerationSeconds,ResourceQuota Restart=always RestartSec=5 - name: apiserver-advertiser.service command: start content: | [Unit] Description=Kubernetes Apiserver Advertiser After=etcd2.service Requires=etcd2.service After=master-apiserver.service [Service] ExecStart=/bin/sh -c 'etcdctl set /corekube/apiservers/$public_ipv4 $public_ipv4' Restart=always RestartSec=120 - name: kube-controller-manager.service command: start content: | [Unit] Description=Kubernetes Controller Manager Documentation=https://github.com/kubernetes/kubernetes After=network-online.target Requires=network-online.target After=kube-apiserver.service Requires=kube-apiserver.service [Service] ExecStartPre=/usr/bin/ln -sf /opt/kubernetes/server/bin/kube-controller-manager 
/opt/bin/kube-controller-manager ExecStart=/opt/bin/kube-controller-manager --cloud-provider=rackspace --cloud-config=/etc/cloud.conf --logtostderr=true --master=127.0.0.1:8080 --v=2 --service-account-private-key-file=/var/run/kubernetes/kube-serviceaccount.key --root-ca-file=/run/kubernetes/apiserver.crt Restart=always RestartSec=5 - name: kube-scheduler.service command: start content: | [Unit] Description=Kubernetes Scheduler Documentation=https://github.com/kubernetes/kubernetes After=network-online.target Requires=network-online.target After=kube-apiserver.service Requires=kube-apiserver.service [Service] ExecStartPre=/usr/bin/ln -sf /opt/kubernetes/server/bin/kube-scheduler /opt/bin/kube-scheduler ExecStart=/opt/bin/kube-scheduler --logtostderr=true --master=127.0.0.1:8080 Restart=always RestartSec=5 #Running nginx service with --net=\"host\" is a necessary evil until running all k8s services in docker. - name: kubernetes-nginx.service command: start content: | [Unit] Description=Kubernetes Nginx Service After=network-online.target Requires=network-online.target After=docker.service Requires=docker.service [Service] ExecStartPre=/opt/bin/git-kubernetes-nginx.sh ExecStartPre=-/usr/bin/docker rm kubernetes_nginx ExecStart=/usr/bin/docker run --rm --net=\"host\" -p \"443:443\" -t --name \"kubernetes_nginx\" kubernetes_nginx ExecStop=/usr/bin/docker stop kubernetes_nginx Restart=always RestartSec=15 ", "commid": "kubernetes_pr_45032"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d40e4a6c562fc50ef12d5d4b4bfe6a402a7cb5ec672dfc45502a752fbe0291f6", "query": "Rackspace support has not ben updated in a few months. cluster/rackspace/cloud-config/minion-cloud- is using etcd_servers for kubelet and kube-proxy, but support for them has been removed since v0.9, I think. I'll remove it in a few days if no-one makes a counter-offer to get the config working with latest release.\nThere's been some work on this but it is blocked while we figure out some load-balancer design stuff. , Eric Tune wrote:\nOkay, I will remove the link to rackspace until it is fixed (). Leaving this issue open to remind us to delete it in, say, a month, if no progress is made.\ncc\nSee also\nI'd prefer to try to get things updated and keep support. However I don't want to continually become stale. every few months either.\nHi all, I want to use kubernetes on my existing rackspace infrastructure but I don't understand if rackspace suppor was removed or not. In Doc, under \"custom cloud solution\" it's linked but I wasn't able to deploy it neither with install method nor with . With i got a error (I have 16GB or RAM!). thanks in advance, Fx\nIt looks like no support has happened on Rackspace for a year.\nYou should not expect it to work, and we should delete rackspace support.\nACK.\nUnfortunately I think we'll need to drop support for Rackspace as a deployment target since I can't commit to keeping the deployment scripts up to date. Rackspace support as a cloudprovider should be maintained.\nI would be willing to support the Rackspace deployment target, if there is interest to keep it alive.\nThere would be definitely an interest from our side - sad to see that Rackspace has no interest in maintaining it\nNo problem, I will take over the maintenance. It will take me a bit of time to get caught up, but I will get it done as soon as possible.\nDo you think it would be hard to get the of LBaaS working ?\nSorry it took me a bit to respond. there is a quick update as to where I am. 
I was busy with some stuff, that prevented me from working on this. But that has been resolved, and should have updates by the end of next week. However what's causing the delays is the lack of a good API for Rackspace Cloud. I've used the gophercloud, it works but I'm not a fan. Additionally gophercloud doesn't have full API support (one of the things I'm not a fan). This would make it hard to implement LBaaS. I should have a better idea of your request sometime next week.\nI have seen this api docu: - is it the same as you saw? I think it should be doable with it.", "positive_passages": [{"docid": "doc-en-kubernetes-d74319ac30c5857ce1cd653923a9913ee799a474230ff450b2e5deef7e55d6e3", "text": " #cloud-config write_files: - path: /opt/bin/regen-apiserver-list.sh permissions: 0755 content: | #!/bin/sh m=$(echo $(etcdctl ls --recursive /corekube/apiservers | cut -d/ -f4 | sort) | tr ' ' ,) mkdir -p /run/kubelet echo \"APISERVER_IPS=$m\" > /run/kubelet/apiservers.env echo \"FIRST_APISERVER_URL=https://${m%%,*}:6443\" >> /run/kubelet/apiservers.env - path: /opt/bin/download-release.sh permissions: 0755 content: | #!/bin/bash # This temp URL is only good for the length of time specified at cluster creation time. # Afterward, it will result in a 403. OBJECT_URL=\"CLOUD_FILES_URL\" if [ ! -s /opt/kubernetes.tar.gz ] then echo \"Downloading release ($OBJECT_URL)\" wget \"${OBJECT_URL}\" -O /opt/kubernetes.tar.gz echo \"Unpacking release\" rm -rf /opt/kubernetes || false tar xzf /opt/kubernetes.tar.gz -C /opt/ else echo \"kubernetes release found. Skipping download.\" fi - path: /run/config-kubelet.sh permissions: 0755 content: | #!/bin/bash -e set -x /usr/bin/mkdir -p /var/lib/kubelet cat > /var/lib/kubelet/kubeconfig << EOF apiVersion: v1 kind: Config users: - name: kubelet user: token: KUBELET_TOKEN clusters: - name: local cluster: insecure-skip-tls-verify: true contexts: - context: cluster: local user: kubelet name: service-account-context current-context: service-account-context EOF - path: /run/config-kube-proxy.sh permissions: 0755 content: | #!/bin/bash -e set -x /usr/bin/mkdir -p /var/lib/kube-proxy cat > /var/lib/kube-proxy/kubeconfig << EOF apiVersion: v1 kind: Config users: - name: kube-proxy user: token: KUBE_PROXY_TOKEN clusters: - name: local cluster: insecure-skip-tls-verify: true contexts: - context: cluster: local user: kube-proxy name: service-account-context current-context: service-account-context EOF coreos: etcd2: discovery: https://discovery.etcd.io/DISCOVERY_ID advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001 initial-advertise-peer-urls: http://$private_ipv4:2380 listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001 flannel: ip_masq: true interface: eth2 fleet: public-ip: $private_ipv4 metadata: kubernetes_role=minion update: reboot-strategy: off units: - name: etcd2.service command: start - name: fleet.service command: start - name: flanneld.service drop-ins: - name: 50-flannel.conf content: | [Unit] Requires=etcd2.service After=etcd2.service [Service] ExecStartPre=-/usr/bin/etcdctl mk /coreos.com/network/config '{\"Network\":\"KUBE_NETWORK\", \"Backend\": {\"Type\": \"host-gw\"}}' command: start - name: docker.service command: start drop-ins: - name: 51-docker-mirror.conf content: | [Unit] # making sure that flanneld finished startup, otherwise containers # won't land in flannel's network... 
Requires=flanneld.service After=flanneld.service Restart=Always - name: download-release.service command: start content: | [Unit] Description=Downloads Kubernetes Release After=network-online.target Requires=network-online.target [Service] Type=oneshot RemainAfterExit=yes ExecStart=/usr/bin/bash /opt/bin/download-release.sh - name: kubelet.service command: start content: | [Unit] Description=Kubernetes Kubelet Documentation=https://github.com/kubernetes/kubernetes After=network-online.target Requires=network-online.target After=docker.service Requires=docker.service After=download-release.service Requires=download-release.service After=apiserver-finder.service Requires=apiserver-finder.service [Service] EnvironmentFile=/run/kubelet/apiservers.env ExecStartPre=/run/config-kubelet.sh ExecStartPre=/usr/bin/ln -sf /opt/kubernetes/server/bin/kubelet /opt/bin/kubelet ExecStart=/opt/bin/kubelet --address=$private_ipv4 --api-servers=${FIRST_APISERVER_URL} --cluster-dns=DNS_SERVER_IP --cluster-domain=DNS_DOMAIN --healthz-bind-address=$private_ipv4 --hostname-override=$private_ipv4 --logtostderr=true --v=2 Restart=always RestartSec=5 KillMode=process - name: kube-proxy.service command: start content: | [Unit] Description=Kubernetes Proxy Documentation=https://github.com/kubernetes/kubernetes After=network-online.target Requires=network-online.target After=docker.service Requires=docker.service After=download-release.service Requires=download-release.service After=apiserver-finder.service Requires=apiserver-finder.service [Service] EnvironmentFile=/run/kubelet/apiservers.env ExecStartPre=/run/config-kube-proxy.sh ExecStartPre=/usr/bin/ln -sf /opt/kubernetes/server/bin/kube-proxy /opt/bin/kube-proxy ExecStart=/opt/bin/kube-proxy --bind-address=$private_ipv4 --kubeconfig=/var/lib/kube-proxy/kubeconfig --logtostderr=true --hostname-override=$private_ipv4 --master=${FIRST_APISERVER_URL} Restart=always RestartSec=5 - name: apiserver-finder.service command: start content: | [Unit] Description=Kubernetes Apiserver finder After=network-online.target Requires=network-online.target After=etcd2.service Requires=etcd2.service [Service] ExecStartPre=/opt/bin/regen-apiserver-list.sh ExecStart=/usr/bin/etcdctl exec-watch --recursive /corekube/apiservers -- /opt/bin/regen-apiserver-list.sh Restart=always RestartSec=30 - name: cbr0.netdev command: start content: | [NetDev] Kind=bridge Name=cbr0 - name: cbr0.network command: start content: | [Match] Name=cbr0 [Network] Address=10.240.INDEX.1/24 - name: nat.service command: start content: | [Unit] Description=NAT container->outside traffic [Service] ExecStart=/usr/sbin/iptables -t nat -A POSTROUTING -o eth0 -s 10.240.INDEX.0/24 -j MASQUERADE ExecStart=/usr/sbin/iptables -t nat -A POSTROUTING -o eth1 -s 10.240.INDEX.0/24 -j MASQUERADE RemainAfterExit=yes Type=oneshot ", "commid": "kubernetes_pr_45032"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d40e4a6c562fc50ef12d5d4b4bfe6a402a7cb5ec672dfc45502a752fbe0291f6", "query": "Rackspace support has not ben updated in a few months. cluster/rackspace/cloud-config/minion-cloud- is using etcd_servers for kubelet and kube-proxy, but support for them has been removed since v0.9, I think. I'll remove it in a few days if no-one makes a counter-offer to get the config working with latest release.\nThere's been some work on this but it is blocked while we figure out some load-balancer design stuff. , Eric Tune wrote:\nOkay, I will remove the link to rackspace until it is fixed (). 
Leaving this issue open to remind us to delete it in, say, a month, if no progress is made.\ncc\nSee also\nI'd prefer to try to get things updated and keep support. However I don't want to continually become stale. every few months either.\nHi all, I want to use kubernetes on my existing rackspace infrastructure but I don't understand if rackspace suppor was removed or not. In Doc, under \"custom cloud solution\" it's linked but I wasn't able to deploy it neither with install method nor with . With i got a error (I have 16GB or RAM!). thanks in advance, Fx\nIt looks like no support has happened on Rackspace for a year.\nYou should not expect it to work, and we should delete rackspace support.\nACK.\nUnfortunately I think we'll need to drop support for Rackspace as a deployment target since I can't commit to keeping the deployment scripts up to date. Rackspace support as a cloudprovider should be maintained.\nI would be willing to support the Rackspace deployment target, if there is interest to keep it alive.\nThere would be definitely an interest from our side - sad to see that Rackspace has no interest in maintaining it\nNo problem, I will take over the maintenance. It will take me a bit of time to get caught up, but I will get it done as soon as possible.\nDo you think it would be hard to get the of LBaaS working ?\nSorry it took me a bit to respond. there is a quick update as to where I am. I was busy with some stuff, that prevented me from working on this. But that has been resolved, and should have updates by the end of next week. However what's causing the delays is the lack of a good API for Rackspace Cloud. I've used the gophercloud, it works but I'm not a fan. Additionally gophercloud doesn't have full API support (one of the things I'm not a fan). This would make it hard to implement LBaaS. I should have a better idea of your request sometime next week.\nI have seen this api docu: - is it the same as you saw? I think it should be doable with it.", "positive_passages": [{"docid": "doc-en-kubernetes-890b30325ea00471731c55586f129e9f88b4c5191e31ff8f1c55e9b0eb69c6e6", "text": " #!/bin/bash # Copyright 2014 The Kubernetes Authors. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Sane defaults for dev environments. 
The following variables can be easily overriden # by setting each as a ENV variable ahead of time: # KUBE_IMAGE, KUBE_MASTER_FLAVOR, KUBE_NODE_FLAVOR, NUM_NODES, NOVA_NETWORK and SSH_KEY_NAME # Shared KUBE_IMAGE=\"${KUBE_IMAGE-3eba4fbb-51da-4233-b699-8a4030561add}\" # CoreOS (Stable) SSH_KEY_NAME=\"${SSH_KEY_NAME-id_kubernetes}\" NOVA_NETWORK_LABEL=\"kubernetes-pool-net\" NOVA_NETWORK_CIDR=\"${NOVA_NETWORK-192.168.0.0/24}\" INSTANCE_PREFIX=\"kubernetes\" # Master KUBE_MASTER_FLAVOR=\"${KUBE_MASTER_FLAVOR-general1-1}\" MASTER_NAME=\"${INSTANCE_PREFIX}-master\" MASTER_TAG=\"tags=${INSTANCE_PREFIX}-master\" # Node KUBE_NODE_FLAVOR=\"${KUBE_NODE_FLAVOR-general1-2}\" NUM_NODES=\"${NUM_NODES-4}\" NODE_TAG=\"tags=${INSTANCE_PREFIX}-node\" NODE_NAMES=($(eval echo ${INSTANCE_PREFIX}-node-{1..${NUM_NODES}})) KUBE_NETWORK=\"10.240.0.0/16\" SERVICE_CLUSTER_IP_RANGE=\"10.0.0.0/16\" # formerly PORTAL_NET # Optional: Enable node logging. ENABLE_NODE_LOGGING=false LOGGING_DESTINATION=elasticsearch # Optional: When set to true, Elasticsearch and Kibana will be setup as part of the cluster bring up. ENABLE_CLUSTER_LOGGING=false ELASTICSEARCH_LOGGING_REPLICAS=1 # Optional: Cluster monitoring to setup as part of the cluster bring up: # none - No cluster monitoring setup # influxdb - Heapster, InfluxDB, and Grafana # google - Heapster, Google Cloud Monitoring, and Google Cloud Logging ENABLE_CLUSTER_MONITORING=\"${KUBE_ENABLE_CLUSTER_MONITORING:-influxdb}\" # Optional: Install cluster DNS. ENABLE_CLUSTER_DNS=\"${KUBE_ENABLE_CLUSTER_DNS:-true}\" DNS_SERVER_IP=\"10.0.0.10\" DNS_DOMAIN=\"cluster.local\" # Optional: Enable DNS horizontal autoscaler ENABLE_DNS_HORIZONTAL_AUTOSCALER=\"${KUBE_ENABLE_DNS_HORIZONTAL_AUTOSCALER:-false}\" # Optional: Install Kubernetes UI ENABLE_CLUSTER_UI=\"${KUBE_ENABLE_CLUSTER_UI:-true}\" ", "commid": "kubernetes_pr_45032"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d40e4a6c562fc50ef12d5d4b4bfe6a402a7cb5ec672dfc45502a752fbe0291f6", "query": "Rackspace support has not ben updated in a few months. cluster/rackspace/cloud-config/minion-cloud- is using etcd_servers for kubelet and kube-proxy, but support for them has been removed since v0.9, I think. I'll remove it in a few days if no-one makes a counter-offer to get the config working with latest release.\nThere's been some work on this but it is blocked while we figure out some load-balancer design stuff. , Eric Tune wrote:\nOkay, I will remove the link to rackspace until it is fixed (). Leaving this issue open to remind us to delete it in, say, a month, if no progress is made.\ncc\nSee also\nI'd prefer to try to get things updated and keep support. However I don't want to continually become stale. every few months either.\nHi all, I want to use kubernetes on my existing rackspace infrastructure but I don't understand if rackspace suppor was removed or not. In Doc, under \"custom cloud solution\" it's linked but I wasn't able to deploy it neither with install method nor with . With i got a error (I have 16GB or RAM!). thanks in advance, Fx\nIt looks like no support has happened on Rackspace for a year.\nYou should not expect it to work, and we should delete rackspace support.\nACK.\nUnfortunately I think we'll need to drop support for Rackspace as a deployment target since I can't commit to keeping the deployment scripts up to date. 
Rackspace support as a cloudprovider should be maintained.\nI would be willing to support the Rackspace deployment target, if there is interest to keep it alive.\nThere would be definitely an interest from our side - sad to see that Rackspace has no interest in maintaining it\nNo problem, I will take over the maintenance. It will take me a bit of time to get caught up, but I will get it done as soon as possible.\nDo you think it would be hard to get the of LBaaS working ?\nSorry it took me a bit to respond. there is a quick update as to where I am. I was busy with some stuff, that prevented me from working on this. But that has been resolved, and should have updates by the end of next week. However what's causing the delays is the lack of a good API for Rackspace Cloud. I've used the gophercloud, it works but I'm not a fan. Additionally gophercloud doesn't have full API support (one of the things I'm not a fan). This would make it hard to implement LBaaS. I should have a better idea of your request sometime next week.\nI have seen this api docu: - is it the same as you saw? I think it should be doable with it.", "positive_passages": [{"docid": "doc-en-kubernetes-59137f87b20aa7e948fbc3602a242c043eac9e77f42cbb0bd09bea40bcbc4e10", "text": " #!/bin/bash # Copyright 2014 The Kubernetes Authors. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Bring up a Kubernetes cluster. # # If the full release name (gs:///) is passed in then we take # that directly. If not then we assume we are doing development stuff and take # the defaults in the release config. # exit on any error set -e source $(dirname $0)/../kube-util.sh echo \"Starting cluster using provider: $KUBERNETES_PROVIDER\" verify-prereqs kube-up # skipping validation for now until since machines show up as private IPs # source $(dirname $0)/validate-cluster.sh echo \"Done\" ", "commid": "kubernetes_pr_45032"}], "negative_passages": []} {"query_id": "q-en-kubernetes-d40e4a6c562fc50ef12d5d4b4bfe6a402a7cb5ec672dfc45502a752fbe0291f6", "query": "Rackspace support has not ben updated in a few months. cluster/rackspace/cloud-config/minion-cloud- is using etcd_servers for kubelet and kube-proxy, but support for them has been removed since v0.9, I think. I'll remove it in a few days if no-one makes a counter-offer to get the config working with latest release.\nThere's been some work on this but it is blocked while we figure out some load-balancer design stuff. , Eric Tune wrote:\nOkay, I will remove the link to rackspace until it is fixed (). Leaving this issue open to remind us to delete it in, say, a month, if no progress is made.\ncc\nSee also\nI'd prefer to try to get things updated and keep support. However I don't want to continually become stale. every few months either.\nHi all, I want to use kubernetes on my existing rackspace infrastructure but I don't understand if rackspace suppor was removed or not. In Doc, under \"custom cloud solution\" it's linked but I wasn't able to deploy it neither with install method nor with . 
With i got a error (I have 16GB or RAM!). thanks in advance, Fx\nIt looks like no support has happened on Rackspace for a year.\nYou should not expect it to work, and we should delete rackspace support.\nACK.\nUnfortunately I think we'll need to drop support for Rackspace as a deployment target since I can't commit to keeping the deployment scripts up to date. Rackspace support as a cloudprovider should be maintained.\nI would be willing to support the Rackspace deployment target, if there is interest to keep it alive.\nThere would be definitely an interest from our side - sad to see that Rackspace has no interest in maintaining it\nNo problem, I will take over the maintenance. It will take me a bit of time to get caught up, but I will get it done as soon as possible.\nDo you think it would be hard to get the of LBaaS working ?\nSorry it took me a bit to respond. there is a quick update as to where I am. I was busy with some stuff, that prevented me from working on this. But that has been resolved, and should have updates by the end of next week. However what's causing the delays is the lack of a good API for Rackspace Cloud. I've used the gophercloud, it works but I'm not a fan. Additionally gophercloud doesn't have full API support (one of the things I'm not a fan). This would make it hard to implement LBaaS. I should have a better idea of your request sometime next week.\nI have seen this api docu: - is it the same as you saw? I think it should be doable with it.", "positive_passages": [{"docid": "doc-en-kubernetes-5cc07505eede528b025ddf42d44e1c479ad850e9dfc786f78b8090e80c1bbb63", "text": " #!/bin/bash # Copyright 2014 The Kubernetes Authors. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # A library of helper functions for deploying on Rackspace # Use the config file specified in $KUBE_CONFIG_FILE, or default to # config-default.sh. KUBE_ROOT=$(dirname \"${BASH_SOURCE}\")/../.. source $(dirname ${BASH_SOURCE})/${KUBE_CONFIG_FILE-\"config-default.sh\"} source \"${KUBE_ROOT}/cluster/common.sh\" source \"${KUBE_ROOT}/cluster/rackspace/authorization.sh\" verify-prereqs() { # Make sure that prerequisites are installed. for x in nova swiftly; do if [ \"$(which $x)\" == \"\" ]; then echo \"cluster/rackspace/util.sh: Can't find $x in PATH, please fix and retry.\" exit 1 fi done if [[ -z \"${OS_AUTH_URL-}\" ]]; then echo \"cluster/rackspace/util.sh: OS_AUTH_URL not set.\" echo -e \"texport OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/\" return 1 fi if [[ -z \"${OS_USERNAME-}\" ]]; then echo \"cluster/rackspace/util.sh: OS_USERNAME not set.\" echo -e \"texport OS_USERNAME=myusername\" return 1 fi if [[ -z \"${OS_PASSWORD-}\" ]]; then echo \"cluster/rackspace/util.sh: OS_PASSWORD not set.\" echo -e \"texport OS_PASSWORD=myapikey\" return 1 fi } rax-ssh-key() { if [ ! -f $HOME/.ssh/${SSH_KEY_NAME} ]; then echo \"cluster/rackspace/util.sh: Generating SSH KEY ${HOME}/.ssh/${SSH_KEY_NAME}\" ssh-keygen -f ${HOME}/.ssh/${SSH_KEY_NAME} -N '' > /dev/null fi if ! 
$(nova keypair-list | grep $SSH_KEY_NAME > /dev/null 2>&1); then echo \"cluster/rackspace/util.sh: Uploading key to Rackspace:\" echo -e \"tnova keypair-add ${SSH_KEY_NAME} --pub-key ${HOME}/.ssh/${SSH_KEY_NAME}.pub\" nova keypair-add ${SSH_KEY_NAME} --pub-key ${HOME}/.ssh/${SSH_KEY_NAME}.pub > /dev/null 2>&1 else echo \"cluster/rackspace/util.sh: SSH key ${SSH_KEY_NAME}.pub already uploaded\" fi } rackspace-set-vars() { CLOUDFILES_CONTAINER=\"kubernetes-releases-${OS_USERNAME}\" CONTAINER_PREFIX=${CONTAINER_PREFIX-devel/} find-release-tars } # Retrieves a tempurl from cloudfiles to make the release object publicly accessible temporarily. find-object-url() { rackspace-set-vars KUBE_TAR=${CLOUDFILES_CONTAINER}/${CONTAINER_PREFIX}/kubernetes-server-linux-amd64.tar.gz # Create temp URL good for 24 hours RELEASE_TMP_URL=$(swiftly -A ${OS_AUTH_URL} -U ${OS_USERNAME} -K ${OS_PASSWORD} tempurl GET ${KUBE_TAR} 86400 ) echo \"cluster/rackspace/util.sh: Object temp URL:\" echo -e \"t${RELEASE_TMP_URL}\" } ensure_dev_container() { SWIFTLY_CMD=\"swiftly -A ${OS_AUTH_URL} -U ${OS_USERNAME} -K ${OS_PASSWORD}\" if ! ${SWIFTLY_CMD} get ${CLOUDFILES_CONTAINER} > /dev/null 2>&1 ; then echo \"cluster/rackspace/util.sh: Container doesn't exist. Creating container ${CLOUDFILES_CONTAINER}\" ${SWIFTLY_CMD} put ${CLOUDFILES_CONTAINER} > /dev/null 2>&1 fi } # Copy kubernetes-server-linux-amd64.tar.gz to cloud files object store copy_dev_tarballs() { echo \"cluster/rackspace/util.sh: Uploading to Cloud Files\" ${SWIFTLY_CMD} put -i ${SERVER_BINARY_TAR} ${CLOUDFILES_CONTAINER}/${CONTAINER_PREFIX}/kubernetes-server-linux-amd64.tar.gz > /dev/null 2>&1 echo \"Release pushed.\" } prep_known_tokens() { for (( i=0; i<${#NODE_NAMES[@]}; i++)); do generate_kubelet_tokens ${NODE_NAMES[i]} cat ${KUBE_TEMP}/${NODE_NAMES[i]}_tokens.csv >> ${KUBE_TEMP}/known_tokens.csv done # Generate tokens for other \"service accounts\". Append to known_tokens. # # NB: If this list ever changes, this script actually has to # change to detect the existence of this file, kill any deleted # old tokens and add any new tokens (to handle the upgrade case). 
local -r service_accounts=(\"system:scheduler\" \"system:controller_manager\" \"system:logging\" \"system:monitoring\" \"system:dns\") for account in \"${service_accounts[@]}\"; do echo \"$(create_token),${account},${account}\" >> ${KUBE_TEMP}/known_tokens.csv done generate_admin_token } rax-boot-master() { DISCOVERY_URL=$(curl https://discovery.etcd.io/new?size=1) DISCOVERY_ID=$(echo \"${DISCOVERY_URL}\" | cut -f 4 -d /) echo \"cluster/rackspace/util.sh: etcd discovery URL: ${DISCOVERY_URL}\" # Copy cloud-config to KUBE_TEMP and work some sed magic sed -e \"s|DISCOVERY_ID|${DISCOVERY_ID}|\" -e \"s|CLOUD_FILES_URL|${RELEASE_TMP_URL//&/&}|\" -e \"s|KUBE_USER|${KUBE_USER}|\" -e \"s|KUBE_PASSWORD|${KUBE_PASSWORD}|\" -e \"s|SERVICE_CLUSTER_IP_RANGE|${SERVICE_CLUSTER_IP_RANGE}|\" -e \"s|KUBE_NETWORK|${KUBE_NETWORK}|\" -e \"s|OS_AUTH_URL|${OS_AUTH_URL}|\" -e \"s|OS_USERNAME|${OS_USERNAME}|\" -e \"s|OS_PASSWORD|${OS_PASSWORD}|\" -e \"s|OS_TENANT_NAME|${OS_TENANT_NAME}|\" -e \"s|OS_REGION_NAME|${OS_REGION_NAME}|\" $(dirname $0)/rackspace/cloud-config/master-cloud-config.yaml > $KUBE_TEMP/master-cloud-config.yaml MASTER_BOOT_CMD=\"nova boot --key-name ${SSH_KEY_NAME} --flavor ${KUBE_MASTER_FLAVOR} --image ${KUBE_IMAGE} --meta ${MASTER_TAG} --meta ETCD=${DISCOVERY_ID} --user-data ${KUBE_TEMP}/master-cloud-config.yaml --config-drive true --nic net-id=${NETWORK_UUID} ${MASTER_NAME}\" echo \"cluster/rackspace/util.sh: Booting ${MASTER_NAME} with following command:\" echo -e \"t$MASTER_BOOT_CMD\" $MASTER_BOOT_CMD } rax-boot-nodes() { cp $(dirname $0)/rackspace/cloud-config/node-cloud-config.yaml ${KUBE_TEMP}/node-cloud-config.yaml for (( i=0; i<${#NODE_NAMES[@]}; i++)); do get_tokens_from_csv ${NODE_NAMES[i]} sed -e \"s|DISCOVERY_ID|${DISCOVERY_ID}|\" -e \"s|CLOUD_FILES_URL|${RELEASE_TMP_URL//&/&}|\" -e \"s|DNS_SERVER_IP|${DNS_SERVER_IP:-}|\" -e \"s|DNS_DOMAIN|${DNS_DOMAIN:-}|\" -e \"s|ENABLE_CLUSTER_DNS|${ENABLE_CLUSTER_DNS:-false}|\" -e \"s|ENABLE_NODE_LOGGING|${ENABLE_NODE_LOGGING:-false}|\" -e \"s|INDEX|$((i + 1))|g\" -e \"s|KUBELET_TOKEN|${KUBELET_TOKEN}|\" -e \"s|KUBE_NETWORK|${KUBE_NETWORK}|\" -e \"s|KUBELET_TOKEN|${KUBELET_TOKEN}|\" -e \"s|KUBE_PROXY_TOKEN|${KUBE_PROXY_TOKEN}|\" -e \"s|LOGGING_DESTINATION|${LOGGING_DESTINATION:-}|\" $(dirname $0)/rackspace/cloud-config/node-cloud-config.yaml > $KUBE_TEMP/node-cloud-config-$(($i + 1)).yaml NODE_BOOT_CMD=\"nova boot --key-name ${SSH_KEY_NAME} --flavor ${KUBE_NODE_FLAVOR} --image ${KUBE_IMAGE} --meta ${NODE_TAG} --user-data ${KUBE_TEMP}/node-cloud-config-$(( i +1 )).yaml --config-drive true --nic net-id=${NETWORK_UUID} ${NODE_NAMES[$i]}\" echo \"cluster/rackspace/util.sh: Booting ${NODE_NAMES[$i]} with following command:\" echo -e \"t$NODE_BOOT_CMD\" $NODE_BOOT_CMD done } rax-nova-network() { if ! 
$(nova network-list | grep $NOVA_NETWORK_LABEL > /dev/null 2>&1); then SAFE_CIDR=$(echo $NOVA_NETWORK_CIDR | tr -d '') NETWORK_CREATE_CMD=\"nova network-create $NOVA_NETWORK_LABEL $SAFE_CIDR\" echo \"cluster/rackspace/util.sh: Creating cloud network with following command:\" echo -e \"t${NETWORK_CREATE_CMD}\" $NETWORK_CREATE_CMD else echo \"cluster/rackspace/util.sh: Using existing cloud network $NOVA_NETWORK_LABEL\" fi } detect-nodes() { KUBE_NODE_IP_ADDRESSES=() for (( i=0; i<${#NODE_NAMES[@]}; i++)); do local node_ip=$(nova show --minimal ${NODE_NAMES[$i]} | grep accessIPv4 | awk '{print $4}') echo \"cluster/rackspace/util.sh: Found ${NODE_NAMES[$i]} at ${node_ip}\" KUBE_NODE_IP_ADDRESSES+=(\"${node_ip}\") done if [ -z \"$KUBE_NODE_IP_ADDRESSES\" ]; then echo \"cluster/rackspace/util.sh: Could not detect Kubernetes node nodes. Make sure you've launched a cluster with 'kube-up.sh'\" exit 1 fi } detect-master() { KUBE_MASTER=${MASTER_NAME} echo \"Waiting for ${MASTER_NAME} IP Address.\" echo echo \" This will continually check to see if the master node has an IP address.\" echo KUBE_MASTER_IP=$(nova show $KUBE_MASTER --minimal | grep accessIPv4 | awk '{print $4}') while [ \"${KUBE_MASTER_IP-|}\" == \"|\" ]; do KUBE_MASTER_IP=$(nova show $KUBE_MASTER --minimal | grep accessIPv4 | awk '{print $4}') printf \".\" sleep 2 done echo \"${KUBE_MASTER} IP Address is ${KUBE_MASTER_IP}\" } # $1 should be the network you would like to get an IP address for detect-master-nova-net() { KUBE_MASTER=${MASTER_NAME} MASTER_IP=$(nova show $KUBE_MASTER --minimal | grep $1 | awk '{print $5}') } kube-up() { SCRIPT_DIR=$(CDPATH=\"\" cd $(dirname $0); pwd) rackspace-set-vars ensure_dev_container copy_dev_tarballs # Find the release to use. Generally it will be passed when doing a 'prod' # install and will default to the release/config.sh version when doing a # developer up. find-object-url # Create a temp directory to hold scripts that will be uploaded to master/nodes KUBE_TEMP=$(mktemp -d -t kubernetes.XXXXXX) trap \"rm -rf ${KUBE_TEMP}\" EXIT load-or-gen-kube-basicauth python2.7 $(dirname $0)/../third_party/htpasswd/htpasswd.py -b -c ${KUBE_TEMP}/htpasswd $KUBE_USER $KUBE_PASSWORD HTPASSWD=$(cat ${KUBE_TEMP}/htpasswd) rax-nova-network NETWORK_UUID=$(nova network-list | grep -i ${NOVA_NETWORK_LABEL} | awk '{print $2}') # create and upload ssh key if necessary rax-ssh-key echo \"cluster/rackspace/util.sh: Starting Cloud Servers\" prep_known_tokens rax-boot-master rax-boot-nodes detect-master # TODO look for a better way to get the known_tokens to the master. This is needed over file injection since the files were too large on a 4 node cluster. $(scp -o StrictHostKeyChecking=no -i ~/.ssh/${SSH_KEY_NAME} ${KUBE_TEMP}/known_tokens.csv core@${KUBE_MASTER_IP}:/home/core/known_tokens.csv) $(sleep 2) $(ssh -o StrictHostKeyChecking=no -i ~/.ssh/${SSH_KEY_NAME} core@${KUBE_MASTER_IP} sudo /usr/bin/mkdir -p /var/lib/kube-apiserver) $(ssh -o StrictHostKeyChecking=no -i ~/.ssh/${SSH_KEY_NAME} core@${KUBE_MASTER_IP} sudo mv /home/core/known_tokens.csv /var/lib/kube-apiserver/known_tokens.csv) $(ssh -o StrictHostKeyChecking=no -i ~/.ssh/${SSH_KEY_NAME} core@${KUBE_MASTER_IP} sudo chown root:root /var/lib/kube-apiserver/known_tokens.csv) $(ssh -o StrictHostKeyChecking=no -i ~/.ssh/${SSH_KEY_NAME} core@${KUBE_MASTER_IP} sudo systemctl restart kube-apiserver) FAIL=0 for job in `jobs -p` do wait $job || let \"FAIL+=1\" done if (( $FAIL != 0 )); then echo \"${FAIL} commands failed. 
Exiting.\" exit 2 fi echo \"Waiting for cluster initialization.\" echo echo \" This will continually check to see if the API for kubernetes is reachable.\" echo \" This might loop forever if there was some uncaught error during start\" echo \" up.\" echo #This will fail until apiserver salt is updated until $(curl --insecure --user ${KUBE_USER}:${KUBE_PASSWORD} --max-time 5 --fail --output /dev/null --silent https://${KUBE_MASTER_IP}/healthz); do printf \".\" sleep 2 done echo \"Kubernetes cluster created.\" export KUBE_CERT=\"\" export KUBE_KEY=\"\" export CA_CERT=\"\" export CONTEXT=\"rackspace_${INSTANCE_PREFIX}\" create-kubeconfig # Don't bail on errors, we want to be able to print some info. set +e detect-nodes # ensures KUBECONFIG is set get-kubeconfig-basicauth echo \"All nodes may not be online yet, this is okay.\" echo echo \"Kubernetes cluster is running. The master is running at:\" echo echo \" https://${KUBE_MASTER_IP}\" echo echo \"The user name and password to use is located in ${KUBECONFIG:-$DEFAULT_KUBECONFIG}.\" echo echo \"Security note: The server above uses a self signed certificate. This is\" echo \" subject to \"Man in the middle\" type attacks.\" echo } # Perform preparations required to run e2e tests function prepare-e2e() { echo \"Rackspace doesn't need special preparations for e2e tests\" } ", "commid": "kubernetes_pr_45032"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3e41c7a0a85635ba91742439b913fa4ee706ea1bfde3bc5546521b3318c03960", "query": "If another process is altering iptables rules at the time k8s attempts to add iptables rules it will fail with a locking error. k8s should use -w and retry after a delay. Below, an external process (openshift-sdn-node) modifies iptables at roughly the same time and k8s fails without retrying. Related to\nSorry, I misunderstood the -w flag, with it retry logic wouldn't be necessary as it'd wait. Perhaps simply retrying is a better solution as -w flag may not exist everywhere we'd like.\nLooks like -w was to iptables 1.4.20(August 2013).\nYeah, we have to be pretty back-compatible with iptables. We always retry after a few seconds - is there really a bug here? , Scott Dodson wrote:\nIt definitely seems like there is some sort of a problem. We see this randomly on system startup. In our environment OpenShift is launching kubernetes around the same time other systemd units are modifying the firewall config. If one of these others units has the lock kubernetes will report the error Scott mentions and then will never create the required nat'ing for services. Here's how to reproduce this: At this point if you run you will see the exact same error kubernetes reports. It's probably not obvious why you have to set the breakpoint at 257 but if you set the breakpoint at 260 and then step through the code it appears that the compiler has optimized lines 263 and 257 to be the same. In any case running will verify if you have the lock or not. Once you've verified you have the lock just restart OpenShift (running kubernetes v0.14.1-582-gb12d75d): At this point you can press in gdb to continue the iptables invocation and release the lock. Afterwards I will see output from cadvisor every once in a while, however the will show the firewall is still blank: I initially restarted Kubernetes with a blank firewall at 11:21 and yet at 11:38 I still have nothing:\nI probably should have made it more clear, I removed the lock by pressing 'c' in gdb to continue. That's when I started waiting. 
I'll update my comment to make that more clear.\nSo something for a holding the lock? I am out of your office for the rest of this week, but maybe you can figure out - is there a stale process holding a lock? What kind of lock is it? Flock or semaphore or ... ? On Apr 29, 2015 8:44 AM, \"Brenton Leanhardt\" wrote:\nWhat we're saying is that on startup it's fairly common for multiple systemd units to make firewall additions. If one of them grabs the lock (rarely for more than a few seconds) it will cause kubernetes to fail to initialize iptables and it will never fix itself. The lock only need be grabbed the microsecond kubernetes tries to initialize iptables for this bug to present itself. We know exactly which program in our environment was concurrently making iptables updates. The problem is that kubernetes doesn't appear to be retrying if it fails to initialize iptables.\nOn RHEL the lock is the xtables unix socket lock in iptables to allow multiple programs to update iptables concurrently. Other platforms may use a flock since that is the default upstream. To solve this kubernetes needs one more more of the following: pass the flag to as scott and the logs mention (possibly only available on some platforms). If kubernetes desides to wait for the lock it will need a timeout. retry the firewall initialization quickly and maybe backoff if it still can't get the lock It's probably worth noting that if iptables on a platform doesn't have iptables lock support concurrency is likely broken already in other ways for that platform.\nIt does retry all up tables ops. Unless it somehow gets pathologically phase locked with your agent, or there is some deeper bug. We have e2e tests that prove you can delete tables and it will recreate them. Are you saying that it never retries (did you set -v=4?) or that every retry fails? On RHEL the lock is the xtables unix socket lock in iptables to allow multiple programs to update iptables concurrently. Other platforms may use a flock since that is the default upstream. To solve this kubernetes needs one more more of the following: pass the -w flag to iptables as scott and the logs mention (possibly only available on some platforms). If kubernetes desides to wait for the lock it will need a timeout. retry the firewall initialization quickly and maybe backoff if it still can't get the lock It's probably worth noting that if iptables on a platform doesn't have iptables lock support concurrency is likely broken already in other ways for that platform. \u2014 Reply to this email directly or view it on GitHub .\nI can grab the lock for just the second that kubernetes needs it and then let it go. The kube-proxy nat'ing will never be rebuilt without intervention (restarting kubernetes). That seems like a bug to me and is what my original comment reproduces. In addition, once the environment is in this state it won't even pick up new service nat rules. Other nodes in my environment will pick them up but this one is dead. I highly suspect it has to do with the fact that iptables initialization is failing. We test all the time that once a system is up and running you can clean iptables and it will come back. This is a slightly different case.\nThis seems like a v1 blocker - we see this in every realistic environment we put the Kube-proxy in. Disagreement?\nI still don't understand this issue. It is demonstrably true that the kube-proxy retries all of the iptables logic periodically. 
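The periodic retry described above is essentially a resync loop. As a rough sketch only (not the actual kube-proxy code; ensureRules is a hypothetical placeholder for whatever reconciles the rules), the pattern looks like this in Go:

```go
package main

import (
	"log"
	"time"
)

// ensureRules is a hypothetical stand-in for whatever reconciles the desired
// iptables state; it must be idempotent so re-running it is always safe.
func ensureRules() error {
	// ... check and (re)create chains and rules here ...
	return nil
}

// syncLoop reapplies the rules on a fixed interval, so rules deleted by an
// operator or lost to a transient failure are eventually restored.
func syncLoop(period time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(period)
	defer ticker.Stop()
	for {
		if err := ensureRules(); err != nil {
			log.Printf("iptables resync failed, will retry next period: %v", err)
		}
		select {
		case <-ticker.C:
		case <-stop:
			return
		}
	}
}

func main() {
	stop := make(chan struct{})
	go syncLoop(2*time.Second, stop)
	time.Sleep(7 * time.Second) // stand-in for the daemon's lifetime
	close(stop)
}
```

The important property is that the reconcile step is idempotent, so running it again after a transient failure is always safe.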
It's possible (given the mention of \"won't even pick up new service nat rules\") that this locking tickles a bug in kube-proxy, but I am having trouble reproducing it. Can you run your kube-proxy with -v=4, trigger the bug, and then take a look at the logs? If this is real, yeah it's a 1.0 blocker.\nThe problem is in the initialization process. If you hold the lock while the kube-proxy tries to initialize itself then it will be wedged forever. What you're showing is the kube-proxy can repair the iptables rules after the initialization code has run. The logs I pasted earlier show that after the initialization fails at it will never try again. I tend to run at loglevel 4 by default but I'll try to find time today to run through this again.\nI have a fix coming - can you test the PR before commit?\nI ran the exact same test as mentioned in my original comment. While the debug lines have changed the end result is the same: grab the lock (at this point even will fail) start the kube-proxy release the lock (at this point works) wait.. try creating another service wait.. The rules never come back. Here are the logs:\nSure, I'll test the PR. I'm betting it will apply fairly cleanly on to my OpenShift codebase. If not I'll launch Kubernetes directly.\nMany of those log lines are not part of upstream. In fact, what upstream has is a glog.Fatalf() which will kill kube-proxy, and when it restarts it will try again. Did you guys break it by not actually exiting?\nLooks like we did. We need to put it into a restart loop. ----- Original Message -----\nis teh fix, though I assert we should not need it, and if you need it this fix may not help you anyway. I am still OK to commit this.\nAlthough NewProxier sucks because it doesn't return a usable error. I'll take the todo to clean up that part. ----- Original Message -----\nhey, it logs its error - nothing wrong with that :) , Clayton Coleman wrote:\n:)\nOpened and will move the rest back to openshift.\nI restarted docker and it worked then", "positive_passages": [{"docid": "doc-en-kubernetes-4c9dc51b3201feb404937f941349480ed18fc2def1112cedddbfe043047e10e6", "text": "return nil } glog.Infof(\"Setting Proxy IP to %v\", hostIP) return CreateProxier(loadBalancer, listenIP, iptables, hostIP) return createProxier(loadBalancer, listenIP, iptables, hostIP) } func CreateProxier(loadBalancer LoadBalancer, listenIP net.IP, iptables iptables.Interface, hostIP net.IP) *Proxier { func createProxier(loadBalancer LoadBalancer, listenIP net.IP, iptables iptables.Interface, hostIP net.IP) *Proxier { glog.Infof(\"Initializing iptables\") // Clean up old messes. Ignore erors. iptablesDeleteOld(iptables)", "commid": "kubernetes_pr_8264"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3e41c7a0a85635ba91742439b913fa4ee706ea1bfde3bc5546521b3318c03960", "query": "If another process is altering iptables rules at the time k8s attempts to add iptables rules it will fail with a locking error. k8s should use -w and retry after a delay. Below, an external process (openshift-sdn-node) modifies iptables at roughly the same time and k8s fails without retrying. Related to\nSorry, I misunderstood the -w flag, with it retry logic wouldn't be necessary as it'd wait. Perhaps simply retrying is a better solution as -w flag may not exist everywhere we'd like.\nLooks like -w was to iptables 1.4.20(August 2013).\nYeah, we have to be pretty back-compatible with iptables. We always retry after a few seconds - is there really a bug here? 
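A hedged sketch of the two mitigations being weighed here, probing once for -w support and otherwise retrying a failed invocation a few times, might look like the following; the helper names and the probe command are assumptions for illustration, not the shipped implementation.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// supportsWait probes whether this iptables build accepts -w (added around
// iptables 1.4.20 per the discussion). A failed probe simply means we fall
// back to plain retries, which keeps older distributions working.
func supportsWait() bool {
	return exec.Command("iptables", "-w", "-L", "-n").Run() == nil
}

// runIPTables retries one iptables invocation a few times so that a transient
// xtables lock held by another writer does not turn into a permanent failure.
func runIPTables(useWait bool, args ...string) error {
	if useWait {
		args = append([]string{"-w"}, args...)
	}
	var lastErr error
	for attempt := 0; attempt < 5; attempt++ {
		out, err := exec.Command("iptables", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("iptables %v failed: %v: %s", args, err, out)
		time.Sleep(2 * time.Second)
	}
	return lastErr
}

func main() {
	useWait := supportsWait()
	// Example call: create a NAT chain, tolerating a briefly held lock.
	if err := runIPTables(useWait, "-t", "nat", "-N", "KUBE-EXAMPLE"); err != nil {
		fmt.Println("giving up:", err)
	}
}
```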
, Scott Dodson wrote:\nIt definitely seems like there is some sort of a problem. We see this randomly on system startup. In our environment OpenShift is launching kubernetes around the same time other systemd units are modifying the firewall config. If one of these others units has the lock kubernetes will report the error Scott mentions and then will never create the required nat'ing for services. Here's how to reproduce this: At this point if you run you will see the exact same error kubernetes reports. It's probably not obvious why you have to set the breakpoint at 257 but if you set the breakpoint at 260 and then step through the code it appears that the compiler has optimized lines 263 and 257 to be the same. In any case running will verify if you have the lock or not. Once you've verified you have the lock just restart OpenShift (running kubernetes v0.14.1-582-gb12d75d): At this point you can press in gdb to continue the iptables invocation and release the lock. Afterwards I will see output from cadvisor every once in a while, however the will show the firewall is still blank: I initially restarted Kubernetes with a blank firewall at 11:21 and yet at 11:38 I still have nothing:\nI probably should have made it more clear, I removed the lock by pressing 'c' in gdb to continue. That's when I started waiting. I'll update my comment to make that more clear.\nSo something for a holding the lock? I am out of your office for the rest of this week, but maybe you can figure out - is there a stale process holding a lock? What kind of lock is it? Flock or semaphore or ... ? On Apr 29, 2015 8:44 AM, \"Brenton Leanhardt\" wrote:\nWhat we're saying is that on startup it's fairly common for multiple systemd units to make firewall additions. If one of them grabs the lock (rarely for more than a few seconds) it will cause kubernetes to fail to initialize iptables and it will never fix itself. The lock only need be grabbed the microsecond kubernetes tries to initialize iptables for this bug to present itself. We know exactly which program in our environment was concurrently making iptables updates. The problem is that kubernetes doesn't appear to be retrying if it fails to initialize iptables.\nOn RHEL the lock is the xtables unix socket lock in iptables to allow multiple programs to update iptables concurrently. Other platforms may use a flock since that is the default upstream. To solve this kubernetes needs one more more of the following: pass the flag to as scott and the logs mention (possibly only available on some platforms). If kubernetes desides to wait for the lock it will need a timeout. retry the firewall initialization quickly and maybe backoff if it still can't get the lock It's probably worth noting that if iptables on a platform doesn't have iptables lock support concurrency is likely broken already in other ways for that platform.\nIt does retry all up tables ops. Unless it somehow gets pathologically phase locked with your agent, or there is some deeper bug. We have e2e tests that prove you can delete tables and it will recreate them. Are you saying that it never retries (did you set -v=4?) or that every retry fails? On RHEL the lock is the xtables unix socket lock in iptables to allow multiple programs to update iptables concurrently. Other platforms may use a flock since that is the default upstream. To solve this kubernetes needs one more more of the following: pass the -w flag to iptables as scott and the logs mention (possibly only available on some platforms). 
If kubernetes decides to wait for the lock it will need a timeout. retry the firewall initialization quickly and maybe backoff if it still can't get the lock It's probably worth noting that if iptables on a platform doesn't have iptables lock support concurrency is likely broken already in other ways for that platform. \u2014 Reply to this email directly or view it on GitHub .\nI can grab the lock for just the second that kubernetes needs it and then let it go. The kube-proxy nat'ing will never be rebuilt without intervention (restarting kubernetes). That seems like a bug to me and is what my original comment reproduces. In addition, once the environment is in this state it won't even pick up new service nat rules. Other nodes in my environment will pick them up but this one is dead. I highly suspect it has to do with the fact that iptables initialization is failing. We test all the time that once a system is up and running you can clean iptables and it will come back. This is a slightly different case.\nThis seems like a v1 blocker - we see this in every realistic environment we put the Kube-proxy in. Disagreement?\nI still don't understand this issue. It is demonstrably true that the kube-proxy retries all of the iptables logic periodically. It's possible (given the mention of \"won't even pick up new service nat rules\") that this locking tickles a bug in kube-proxy, but I am having trouble reproducing it. Can you run your kube-proxy with -v=4, trigger the bug, and then take a look at the logs? If this is real, yeah it's a 1.0 blocker.\nThe problem is in the initialization process. If you hold the lock while the kube-proxy tries to initialize itself then it will be wedged forever. What you're showing is the kube-proxy can repair the iptables rules after the initialization code has run. The logs I pasted earlier show that after the initialization fails at it will never try again. I tend to run at loglevel 4 by default but I'll try to find time today to run through this again.\nI have a fix coming - can you test the PR before commit?\nI ran the exact same test as mentioned in my original comment. While the debug lines have changed the end result is the same: grab the lock (at this point even will fail) start the kube-proxy release the lock (at this point works) wait.. try creating another service wait.. The rules never come back. Here are the logs:\nSure, I'll test the PR. I'm betting it will apply fairly cleanly onto my OpenShift codebase. If not I'll launch Kubernetes directly.\nMany of those log lines are not part of upstream. In fact, what upstream has is a glog.Fatalf() which will kill kube-proxy, and when it restarts it will try again. Did you guys break it by not actually exiting?\nLooks like we did. We need to put it into a restart loop. ----- Original Message -----\nis the fix, though I assert we should not need it, and if you need it this fix may not help you anyway. I am still OK to commit this.\nAlthough NewProxier sucks because it doesn't return a usable error. I'll take the todo to clean up that part. 
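The agreed direction above is to stop treating the first failed initialization as terminal: either exit so a supervisor restarts the process, or keep retrying in-process. A minimal sketch of the in-process variant, with the real NewProxier signature replaced by a made-up stand-in, could be:

```go
package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// proxier is a made-up stand-in for the real kube-proxy Proxier type.
type proxier struct{}

var attempts int

// newProxier stands in for the real constructor. Here it fails twice to
// simulate another process holding the iptables lock during startup.
func newProxier() (*proxier, error) {
	attempts++
	if attempts < 3 {
		return nil, errors.New("iptables is locked by another process")
	}
	return &proxier{}, nil
}

// mustNewProxier keeps retrying instead of treating the first failure as
// fatal, so a lock held briefly at boot cannot wedge the proxy forever.
func mustNewProxier() *proxier {
	for {
		p, err := newProxier()
		if err == nil {
			return p
		}
		log.Printf("proxier initialization failed, retrying in 5s: %v", err)
		time.Sleep(5 * time.Second)
	}
}

func main() {
	p := mustNewProxier()
	fmt.Printf("proxier ready: %#v\n", p)
}
```

Returning an error from the constructor, rather than logging and continuing, is what makes a loop like this possible.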
----- Original Message -----\nhey, it logs its error - nothing wrong with that :) , Clayton Coleman wrote:\n:)\nOpened and will move the rest back to openshift.\nI restarted docker and it worked then", "positive_passages": [{"docid": "doc-en-kubernetes-3447c1047c5b50c341e6518a25b0d5b781ccc191d100f4607e4d90a34aa51c01", "text": "}}, }}) p := CreateProxier(lb, net.ParseIP(\"0.0.0.0\"), &fakeIptables{}, net.ParseIP(\"127.0.0.1\")) p := createProxier(lb, net.ParseIP(\"0.0.0.0\"), &fakeIptables{}, net.ParseIP(\"127.0.0.1\")) waitForNumProxyLoops(t, p, 0) svcInfoP, err := p.addServiceOnPort(serviceP, \"TCP\", 0, time.Second)", "commid": "kubernetes_pr_8264"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3e41c7a0a85635ba91742439b913fa4ee706ea1bfde3bc5546521b3318c03960", "query": "If another process is altering iptables rules at the time k8s attempts to add iptables rules it will fail with a locking error. k8s should use -w and retry after a delay. Below, an external process (openshift-sdn-node) modifies iptables at roughly the same time and k8s fails without retrying. Related to\nSorry, I misunderstood the -w flag, with it retry logic wouldn't be necessary as it'd wait. Perhaps simply retrying is a better solution as -w flag may not exist everywhere we'd like.\nLooks like -w was to iptables 1.4.20(August 2013).\nYeah, we have to be pretty back-compatible with iptables. We always retry after a few seconds - is there really a bug here? , Scott Dodson wrote:\nIt definitely seems like there is some sort of a problem. We see this randomly on system startup. In our environment OpenShift is launching kubernetes around the same time other systemd units are modifying the firewall config. If one of these others units has the lock kubernetes will report the error Scott mentions and then will never create the required nat'ing for services. Here's how to reproduce this: At this point if you run you will see the exact same error kubernetes reports. It's probably not obvious why you have to set the breakpoint at 257 but if you set the breakpoint at 260 and then step through the code it appears that the compiler has optimized lines 263 and 257 to be the same. In any case running will verify if you have the lock or not. Once you've verified you have the lock just restart OpenShift (running kubernetes v0.14.1-582-gb12d75d): At this point you can press in gdb to continue the iptables invocation and release the lock. Afterwards I will see output from cadvisor every once in a while, however the will show the firewall is still blank: I initially restarted Kubernetes with a blank firewall at 11:21 and yet at 11:38 I still have nothing:\nI probably should have made it more clear, I removed the lock by pressing 'c' in gdb to continue. That's when I started waiting. I'll update my comment to make that more clear.\nSo something for a holding the lock? I am out of your office for the rest of this week, but maybe you can figure out - is there a stale process holding a lock? What kind of lock is it? Flock or semaphore or ... ? On Apr 29, 2015 8:44 AM, \"Brenton Leanhardt\" wrote:\nWhat we're saying is that on startup it's fairly common for multiple systemd units to make firewall additions. If one of them grabs the lock (rarely for more than a few seconds) it will cause kubernetes to fail to initialize iptables and it will never fix itself. The lock only need be grabbed the microsecond kubernetes tries to initialize iptables for this bug to present itself. 
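One way to reproduce that window without gdb is to hold the iptables lock from a second process while the proxy starts. The sketch below assumes the flock-based lock at /run/xtables.lock used by newer iptables builds; the abstract-socket lock mentioned for RHEL would need a different mechanism, so the path is an assumption for illustration.

```go
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/sys/unix"
)

// Hold the iptables lock for a while so a concurrently starting kube-proxy
// hits the "holding the xtables lock" error during its initialization.
func main() {
	// Lock file used by flock-based iptables builds; an assumption here, and
	// the abstract-socket lock on older RHEL iptables is not covered by this.
	f, err := os.OpenFile("/run/xtables.lock", os.O_CREATE|os.O_RDWR, 0600)
	if err != nil {
		log.Fatalf("open lock file: %v", err)
	}
	defer f.Close()

	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
		log.Fatalf("flock: %v", err)
	}
	log.Println("holding the xtables lock for 30s; start kube-proxy now")
	time.Sleep(30 * time.Second)

	if err := unix.Flock(int(f.Fd()), unix.LOCK_UN); err != nil {
		log.Fatalf("unlock: %v", err)
	}
	log.Println("lock released")
}
```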
We know exactly which program in our environment was concurrently making iptables updates. The problem is that kubernetes doesn't appear to be retrying if it fails to initialize iptables.\nOn RHEL the lock is the xtables unix socket lock in iptables to allow multiple programs to update iptables concurrently. Other platforms may use a flock since that is the default upstream. To solve this kubernetes needs one more more of the following: pass the flag to as scott and the logs mention (possibly only available on some platforms). If kubernetes desides to wait for the lock it will need a timeout. retry the firewall initialization quickly and maybe backoff if it still can't get the lock It's probably worth noting that if iptables on a platform doesn't have iptables lock support concurrency is likely broken already in other ways for that platform.\nIt does retry all up tables ops. Unless it somehow gets pathologically phase locked with your agent, or there is some deeper bug. We have e2e tests that prove you can delete tables and it will recreate them. Are you saying that it never retries (did you set -v=4?) or that every retry fails? On RHEL the lock is the xtables unix socket lock in iptables to allow multiple programs to update iptables concurrently. Other platforms may use a flock since that is the default upstream. To solve this kubernetes needs one more more of the following: pass the -w flag to iptables as scott and the logs mention (possibly only available on some platforms). If kubernetes desides to wait for the lock it will need a timeout. retry the firewall initialization quickly and maybe backoff if it still can't get the lock It's probably worth noting that if iptables on a platform doesn't have iptables lock support concurrency is likely broken already in other ways for that platform. \u2014 Reply to this email directly or view it on GitHub .\nI can grab the lock for just the second that kubernetes needs it and then let it go. The kube-proxy nat'ing will never be rebuilt without intervention (restarting kubernetes). That seems like a bug to me and is what my original comment reproduces. In addition, once the environment is in this state it won't even pick up new service nat rules. Other nodes in my environment will pick them up but this one is dead. I highly suspect it has to do with the fact that iptables initialization is failing. We test all the time that once a system is up and running you can clean iptables and it will come back. This is a slightly different case.\nThis seems like a v1 blocker - we see this in every realistic environment we put the Kube-proxy in. Disagreement?\nI still don't understand this issue. It is demonstrably true that the kube-proxy retries all of the iptables logic periodically. It's possible (given the mention of \"won't even pick up new service nat rules\") that this locking tickles a bug in kube-proxy, but I am having trouble reproducing it. Can you run your kube-proxy with -v=4, trigger the bug, and then take a look at the logs? If this is real, yeah it's a 1.0 blocker.\nThe problem is in the initialization process. If you hold the lock while the kube-proxy tries to initialize itself then it will be wedged forever. What you're showing is the kube-proxy can repair the iptables rules after the initialization code has run. The logs I pasted earlier show that after the initialization fails at it will never try again. 
I tend to run at loglevel 4 by default but I'll try to find time today to run through this again.\nI have a fix coming - can you test the PR before commit?\nI ran the exact same test as mentioned in my original comment. While the debug lines have changed the end result is the same: grab the lock (at this point even will fail) start the kube-proxy release the lock (at this point works) wait.. try creating another service wait.. The rules never come back. Here are the logs:\nSure, I'll test the PR. I'm betting it will apply fairly cleanly on to my OpenShift codebase. If not I'll launch Kubernetes directly.\nMany of those log lines are not part of upstream. In fact, what upstream has is a glog.Fatalf() which will kill kube-proxy, and when it restarts it will try again. Did you guys break it by not actually exiting?\nLooks like we did. We need to put it into a restart loop. ----- Original Message -----\nis teh fix, though I assert we should not need it, and if you need it this fix may not help you anyway. I am still OK to commit this.\nAlthough NewProxier sucks because it doesn't return a usable error. I'll take the todo to clean up that part. ----- Original Message -----\nhey, it logs its error - nothing wrong with that :) , Clayton Coleman wrote:\n:)\nOpened and will move the rest back to openshift.\nI restarted docker and it worked then", "positive_passages": [{"docid": "doc-en-kubernetes-2692ea95d752e566ea380005856e9dd5663cbcdaa8e7b697eb6ad757627a431f", "text": "serviceQ := ServicePortName{types.NamespacedName{\"testnamespace\", \"echo\"}, \"q\"} serviceX := ServicePortName{types.NamespacedName{\"testnamespace\", \"echo\"}, \"x\"} p := CreateProxier(lb, net.ParseIP(\"0.0.0.0\"), &fakeIptables{}, net.ParseIP(\"127.0.0.1\")) p := createProxier(lb, net.ParseIP(\"0.0.0.0\"), &fakeIptables{}, net.ParseIP(\"127.0.0.1\")) waitForNumProxyLoops(t, p, 0) p.OnUpdate([]api.Service{{", "commid": "kubernetes_pr_8264"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3e41c7a0a85635ba91742439b913fa4ee706ea1bfde3bc5546521b3318c03960", "query": "If another process is altering iptables rules at the time k8s attempts to add iptables rules it will fail with a locking error. k8s should use -w and retry after a delay. Below, an external process (openshift-sdn-node) modifies iptables at roughly the same time and k8s fails without retrying. Related to\nSorry, I misunderstood the -w flag, with it retry logic wouldn't be necessary as it'd wait. Perhaps simply retrying is a better solution as -w flag may not exist everywhere we'd like.\nLooks like -w was to iptables 1.4.20(August 2013).\nYeah, we have to be pretty back-compatible with iptables. We always retry after a few seconds - is there really a bug here? , Scott Dodson wrote:\nIt definitely seems like there is some sort of a problem. We see this randomly on system startup. In our environment OpenShift is launching kubernetes around the same time other systemd units are modifying the firewall config. If one of these others units has the lock kubernetes will report the error Scott mentions and then will never create the required nat'ing for services. Here's how to reproduce this: At this point if you run you will see the exact same error kubernetes reports. It's probably not obvious why you have to set the breakpoint at 257 but if you set the breakpoint at 260 and then step through the code it appears that the compiler has optimized lines 263 and 257 to be the same. In any case running will verify if you have the lock or not. 
Once you've verified you have the lock just restart OpenShift (running kubernetes v0.14.1-582-gb12d75d): At this point you can press in gdb to continue the iptables invocation and release the lock. Afterwards I will see output from cadvisor every once in a while, however the will show the firewall is still blank: I initially restarted Kubernetes with a blank firewall at 11:21 and yet at 11:38 I still have nothing:\nI probably should have made it more clear, I removed the lock by pressing 'c' in gdb to continue. That's when I started waiting. I'll update my comment to make that more clear.\nSo something for a holding the lock? I am out of your office for the rest of this week, but maybe you can figure out - is there a stale process holding a lock? What kind of lock is it? Flock or semaphore or ... ? On Apr 29, 2015 8:44 AM, \"Brenton Leanhardt\" wrote:\nWhat we're saying is that on startup it's fairly common for multiple systemd units to make firewall additions. If one of them grabs the lock (rarely for more than a few seconds) it will cause kubernetes to fail to initialize iptables and it will never fix itself. The lock only need be grabbed the microsecond kubernetes tries to initialize iptables for this bug to present itself. We know exactly which program in our environment was concurrently making iptables updates. The problem is that kubernetes doesn't appear to be retrying if it fails to initialize iptables.\nOn RHEL the lock is the xtables unix socket lock in iptables to allow multiple programs to update iptables concurrently. Other platforms may use a flock since that is the default upstream. To solve this kubernetes needs one more more of the following: pass the flag to as scott and the logs mention (possibly only available on some platforms). If kubernetes desides to wait for the lock it will need a timeout. retry the firewall initialization quickly and maybe backoff if it still can't get the lock It's probably worth noting that if iptables on a platform doesn't have iptables lock support concurrency is likely broken already in other ways for that platform.\nIt does retry all up tables ops. Unless it somehow gets pathologically phase locked with your agent, or there is some deeper bug. We have e2e tests that prove you can delete tables and it will recreate them. Are you saying that it never retries (did you set -v=4?) or that every retry fails? On RHEL the lock is the xtables unix socket lock in iptables to allow multiple programs to update iptables concurrently. Other platforms may use a flock since that is the default upstream. To solve this kubernetes needs one more more of the following: pass the -w flag to iptables as scott and the logs mention (possibly only available on some platforms). If kubernetes desides to wait for the lock it will need a timeout. retry the firewall initialization quickly and maybe backoff if it still can't get the lock It's probably worth noting that if iptables on a platform doesn't have iptables lock support concurrency is likely broken already in other ways for that platform. \u2014 Reply to this email directly or view it on GitHub .\nI can grab the lock for just the second that kubernetes needs it and then let it go. The kube-proxy nat'ing will never be rebuilt without intervention (restarting kubernetes). That seems like a bug to me and is what my original comment reproduces. In addition, once the environment is in this state it won't even pick up new service nat rules. Other nodes in my environment will pick them up but this one is dead. 
I highly suspect it has to do with the fact that iptables initialization is failing. We test all the time that once a system is up and running you can clean iptables and it will come back. This is a slightly different case.\nThis seems like a v1 blocker - we see this in every realistic environment we put the Kube-proxy in. Disagreement?\nI still don't understand this issue. It is demonstrably true that the kube-proxy retries all of the iptables logic periodically. It's possible (given the mention of \"won't even pick up new service nat rules\") that this locking tickles a bug in kube-proxy, but I am having trouble reproducing it. Can you run your kube-proxy with -v=4, trigger the bug, and then take a look at the logs? If this is real, yeah it's a 1.0 blocker.\nThe problem is in the initialization process. If you hold the lock while the kube-proxy tries to initialize itself then it will be wedged forever. What you're showing is the kube-proxy can repair the iptables rules after the initialization code has run. The logs I pasted earlier show that after the initialization fails at it will never try again. I tend to run at loglevel 4 by default but I'll try to find time today to run through this again.\nI have a fix coming - can you test the PR before commit?\nI ran the exact same test as mentioned in my original comment. While the debug lines have changed the end result is the same: grab the lock (at this point even will fail) start the kube-proxy release the lock (at this point works) wait.. try creating another service wait.. The rules never come back. Here are the logs:\nSure, I'll test the PR. I'm betting it will apply fairly cleanly on to my OpenShift codebase. If not I'll launch Kubernetes directly.\nMany of those log lines are not part of upstream. In fact, what upstream has is a glog.Fatalf() which will kill kube-proxy, and when it restarts it will try again. Did you guys break it by not actually exiting?\nLooks like we did. We need to put it into a restart loop. ----- Original Message -----\nis teh fix, though I assert we should not need it, and if you need it this fix may not help you anyway. I am still OK to commit this.\nAlthough NewProxier sucks because it doesn't return a usable error. I'll take the todo to clean up that part. ----- Original Message -----\nhey, it logs its error - nothing wrong with that :) , Clayton Coleman wrote:\n:)\nOpened and will move the rest back to openshift.\nI restarted docker and it worked then", "positive_passages": [{"docid": "doc-en-kubernetes-08371bdcb170d85f10964f5beebcbc7e6395698bcc3b36f2007dbbfe894f9887", "text": "}, }) p := CreateProxier(lb, net.ParseIP(\"0.0.0.0\"), &fakeIptables{}, net.ParseIP(\"127.0.0.1\")) p := createProxier(lb, net.ParseIP(\"0.0.0.0\"), &fakeIptables{}, net.ParseIP(\"127.0.0.1\")) waitForNumProxyLoops(t, p, 0) svcInfo, err := p.addServiceOnPort(service, \"UDP\", 0, time.Second)", "commid": "kubernetes_pr_8264"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3e41c7a0a85635ba91742439b913fa4ee706ea1bfde3bc5546521b3318c03960", "query": "If another process is altering iptables rules at the time k8s attempts to add iptables rules it will fail with a locking error. k8s should use -w and retry after a delay. Below, an external process (openshift-sdn-node) modifies iptables at roughly the same time and k8s fails without retrying. Related to\nSorry, I misunderstood the -w flag, with it retry logic wouldn't be necessary as it'd wait. 
Perhaps simply retrying is a better solution as -w flag may not exist everywhere we'd like.\nLooks like -w was to iptables 1.4.20(August 2013).\nYeah, we have to be pretty back-compatible with iptables. We always retry after a few seconds - is there really a bug here? , Scott Dodson wrote:\nIt definitely seems like there is some sort of a problem. We see this randomly on system startup. In our environment OpenShift is launching kubernetes around the same time other systemd units are modifying the firewall config. If one of these others units has the lock kubernetes will report the error Scott mentions and then will never create the required nat'ing for services. Here's how to reproduce this: At this point if you run you will see the exact same error kubernetes reports. It's probably not obvious why you have to set the breakpoint at 257 but if you set the breakpoint at 260 and then step through the code it appears that the compiler has optimized lines 263 and 257 to be the same. In any case running will verify if you have the lock or not. Once you've verified you have the lock just restart OpenShift (running kubernetes v0.14.1-582-gb12d75d): At this point you can press in gdb to continue the iptables invocation and release the lock. Afterwards I will see output from cadvisor every once in a while, however the will show the firewall is still blank: I initially restarted Kubernetes with a blank firewall at 11:21 and yet at 11:38 I still have nothing:\nI probably should have made it more clear, I removed the lock by pressing 'c' in gdb to continue. That's when I started waiting. I'll update my comment to make that more clear.\nSo something for a holding the lock? I am out of your office for the rest of this week, but maybe you can figure out - is there a stale process holding a lock? What kind of lock is it? Flock or semaphore or ... ? On Apr 29, 2015 8:44 AM, \"Brenton Leanhardt\" wrote:\nWhat we're saying is that on startup it's fairly common for multiple systemd units to make firewall additions. If one of them grabs the lock (rarely for more than a few seconds) it will cause kubernetes to fail to initialize iptables and it will never fix itself. The lock only need be grabbed the microsecond kubernetes tries to initialize iptables for this bug to present itself. We know exactly which program in our environment was concurrently making iptables updates. The problem is that kubernetes doesn't appear to be retrying if it fails to initialize iptables.\nOn RHEL the lock is the xtables unix socket lock in iptables to allow multiple programs to update iptables concurrently. Other platforms may use a flock since that is the default upstream. To solve this kubernetes needs one more more of the following: pass the flag to as scott and the logs mention (possibly only available on some platforms). If kubernetes desides to wait for the lock it will need a timeout. retry the firewall initialization quickly and maybe backoff if it still can't get the lock It's probably worth noting that if iptables on a platform doesn't have iptables lock support concurrency is likely broken already in other ways for that platform.\nIt does retry all up tables ops. Unless it somehow gets pathologically phase locked with your agent, or there is some deeper bug. We have e2e tests that prove you can delete tables and it will recreate them. Are you saying that it never retries (did you set -v=4?) or that every retry fails? 
On RHEL the lock is the xtables unix socket lock in iptables to allow multiple programs to update iptables concurrently. Other platforms may use a flock since that is the default upstream. To solve this kubernetes needs one more more of the following: pass the -w flag to iptables as scott and the logs mention (possibly only available on some platforms). If kubernetes desides to wait for the lock it will need a timeout. retry the firewall initialization quickly and maybe backoff if it still can't get the lock It's probably worth noting that if iptables on a platform doesn't have iptables lock support concurrency is likely broken already in other ways for that platform. \u2014 Reply to this email directly or view it on GitHub .\nI can grab the lock for just the second that kubernetes needs it and then let it go. The kube-proxy nat'ing will never be rebuilt without intervention (restarting kubernetes). That seems like a bug to me and is what my original comment reproduces. In addition, once the environment is in this state it won't even pick up new service nat rules. Other nodes in my environment will pick them up but this one is dead. I highly suspect it has to do with the fact that iptables initialization is failing. We test all the time that once a system is up and running you can clean iptables and it will come back. This is a slightly different case.\nThis seems like a v1 blocker - we see this in every realistic environment we put the Kube-proxy in. Disagreement?\nI still don't understand this issue. It is demonstrably true that the kube-proxy retries all of the iptables logic periodically. It's possible (given the mention of \"won't even pick up new service nat rules\") that this locking tickles a bug in kube-proxy, but I am having trouble reproducing it. Can you run your kube-proxy with -v=4, trigger the bug, and then take a look at the logs? If this is real, yeah it's a 1.0 blocker.\nThe problem is in the initialization process. If you hold the lock while the kube-proxy tries to initialize itself then it will be wedged forever. What you're showing is the kube-proxy can repair the iptables rules after the initialization code has run. The logs I pasted earlier show that after the initialization fails at it will never try again. I tend to run at loglevel 4 by default but I'll try to find time today to run through this again.\nI have a fix coming - can you test the PR before commit?\nI ran the exact same test as mentioned in my original comment. While the debug lines have changed the end result is the same: grab the lock (at this point even will fail) start the kube-proxy release the lock (at this point works) wait.. try creating another service wait.. The rules never come back. Here are the logs:\nSure, I'll test the PR. I'm betting it will apply fairly cleanly on to my OpenShift codebase. If not I'll launch Kubernetes directly.\nMany of those log lines are not part of upstream. In fact, what upstream has is a glog.Fatalf() which will kill kube-proxy, and when it restarts it will try again. Did you guys break it by not actually exiting?\nLooks like we did. We need to put it into a restart loop. ----- Original Message -----\nis teh fix, though I assert we should not need it, and if you need it this fix may not help you anyway. I am still OK to commit this.\nAlthough NewProxier sucks because it doesn't return a usable error. I'll take the todo to clean up that part. 
----- Original Message -----\nhey, it logs its error - nothing wrong with that :) , Clayton Coleman wrote:\n:)\nOpened and will move the rest back to openshift.\nI restarted docker and it worked then", "positive_passages": [{"docid": "doc-en-kubernetes-9918f6181845d4d93d373e98806ba39e99c86437fbb3fb95fe4b92f814c95a8f", "text": "}, }) p := CreateProxier(lb, net.ParseIP(\"0.0.0.0\"), &fakeIptables{}, net.ParseIP(\"127.0.0.1\")) p := createProxier(lb, net.ParseIP(\"0.0.0.0\"), &fakeIptables{}, net.ParseIP(\"127.0.0.1\")) waitForNumProxyLoops(t, p, 0) svcInfo, err := p.addServiceOnPort(service, \"TCP\", 0, time.Second)", "commid": "kubernetes_pr_8264"}], "negative_passages": []} {"query_id": "q-en-kubernetes-83d108f0f5dc476eb3ccc1bd3ac7ca0c88769da6703901d0bcbe64c096876bb1", "query": "Following on from the Kubelet should accept YAML pod specifications. Currnelty it supports only JSON pod specifications.", "positive_passages": [{"docid": "doc-en-kubernetes-f9874b24eec205868f1f38d0ed2b3fdc7edcb3e69b50bd6ef86bb89285c53d83", "text": "\"github.com/GoogleCloudPlatform/kubernetes/pkg/kubelet\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/types\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/util\" utilyaml \"github.com/GoogleCloudPlatform/kubernetes/pkg/util/yaml\" \"github.com/ghodss/yaml\" \"github.com/golang/glog\"", "commid": "kubernetes_pr_7515"}], "negative_passages": []} {"query_id": "q-en-kubernetes-83d108f0f5dc476eb3ccc1bd3ac7ca0c88769da6703901d0bcbe64c096876bb1", "query": "Following on from the Kubelet should accept YAML pod specifications. Currnelty it supports only JSON pod specifications.", "positive_passages": [{"docid": "doc-en-kubernetes-3430a45b60fa8a9bce11719db20d0e292df191242d85696bd5438217be95cbec", "text": "type defaultFunc func(pod *api.Pod) error func tryDecodeSinglePod(data []byte, defaultFn defaultFunc) (parsed bool, pod *api.Pod, err error) { obj, err := api.Scheme.Decode(data) // JSON is valid YAML, so this should work for everything. json, err := utilyaml.ToJSON(data) if err != nil { return false, nil, err } obj, err := api.Scheme.Decode(json) if err != nil { return false, pod, err }", "commid": "kubernetes_pr_7515"}], "negative_passages": []} {"query_id": "q-en-kubernetes-83d108f0f5dc476eb3ccc1bd3ac7ca0c88769da6703901d0bcbe64c096876bb1", "query": "Following on from the Kubelet should accept YAML pod specifications. Currnelty it supports only JSON pod specifications.", "positive_passages": [{"docid": "doc-en-kubernetes-539cf25e446796fb3f0dae58672c223b43b6a20296af51f15f49acd677bff184", "text": "} func tryDecodePodList(data []byte, defaultFn defaultFunc) (parsed bool, pods api.PodList, err error) { obj, err := api.Scheme.Decode(data) json, err := utilyaml.ToJSON(data) if err != nil { return false, api.PodList{}, err } obj, err := api.Scheme.Decode(json) if err != nil { return false, pods, err }", "commid": "kubernetes_pr_7515"}], "negative_passages": []} {"query_id": "q-en-kubernetes-83d108f0f5dc476eb3ccc1bd3ac7ca0c88769da6703901d0bcbe64c096876bb1", "query": "Following on from the Kubelet should accept YAML pod specifications. Currnelty it supports only JSON pod specifications.", "positive_passages": [{"docid": "doc-en-kubernetes-fa416530ef5408b8f05d686a75822492eadb2e92808e4558d8cef5c473ed1410", "text": " /* Copyright 2014 Google Inc. All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package config import ( \"reflect\" \"testing\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/api/testapi\" \"github.com/ghodss/yaml\" ) func noDefault(*api.Pod) error { return nil } func TestDecodeSinglePod(t *testing.T) { pod := &api.Pod{ TypeMeta: api.TypeMeta{ APIVersion: \"\", }, ObjectMeta: api.ObjectMeta{ Name: \"test\", UID: \"12345\", Namespace: \"mynamespace\", }, Spec: api.PodSpec{ RestartPolicy: api.RestartPolicyAlways, DNSPolicy: api.DNSClusterFirst, Containers: []api.Container{{ Name: \"image\", Image: \"test/image\", ImagePullPolicy: \"IfNotPresent\", TerminationMessagePath: \"/dev/termination-log\", }}, }, } json, err := testapi.Codec().Encode(pod) if err != nil { t.Errorf(\"unexpected error: %v\", err) } parsed, podOut, err := tryDecodeSinglePod(json, noDefault) if testapi.Version() == \"v1beta1\" { // v1beta1 conversion leaves empty lists that should be nil podOut.Spec.Containers[0].Resources.Limits = nil podOut.Spec.Containers[0].Resources.Requests = nil } if !parsed { t.Errorf(\"expected to have parsed file: (%s)\", string(json)) } if err != nil { t.Errorf(\"unexpected error: %v (%s)\", err, string(json)) } if !reflect.DeepEqual(pod, podOut) { t.Errorf(\"expected:n%#vngot:n%#vn%s\", pod, podOut, string(json)) } externalPod, err := testapi.Converter().ConvertToVersion(pod, \"v1beta3\") if err != nil { t.Errorf(\"unexpected error: %v\", err) } yaml, err := yaml.Marshal(externalPod) if err != nil { t.Errorf(\"unexpected error: %v\", err) } parsed, podOut, err = tryDecodeSinglePod(yaml, noDefault) if !parsed { t.Errorf(\"expected to have parsed file: (%s)\", string(yaml)) } if err != nil { t.Errorf(\"unexpected error: %v (%s)\", err, string(yaml)) } if !reflect.DeepEqual(pod, podOut) { t.Errorf(\"expected:n%#vngot:n%#vn%s\", pod, podOut, string(yaml)) } } func TestDecodePodList(t *testing.T) { pod := &api.Pod{ TypeMeta: api.TypeMeta{ APIVersion: \"\", }, ObjectMeta: api.ObjectMeta{ Name: \"test\", UID: \"12345\", Namespace: \"mynamespace\", }, Spec: api.PodSpec{ RestartPolicy: api.RestartPolicyAlways, DNSPolicy: api.DNSClusterFirst, Containers: []api.Container{{ Name: \"image\", Image: \"test/image\", ImagePullPolicy: \"IfNotPresent\", TerminationMessagePath: \"/dev/termination-log\", }}, }, } podList := &api.PodList{ Items: []api.Pod{*pod}, } json, err := testapi.Codec().Encode(podList) if err != nil { t.Errorf(\"unexpected error: %v\", err) } parsed, podListOut, err := tryDecodePodList(json, noDefault) if testapi.Version() == \"v1beta1\" { // v1beta1 conversion leaves empty lists that should be nil podListOut.Items[0].Spec.Containers[0].Resources.Limits = nil podListOut.Items[0].Spec.Containers[0].Resources.Requests = nil } if !parsed { t.Errorf(\"expected to have parsed file: (%s)\", string(json)) } if err != nil { t.Errorf(\"unexpected error: %v (%s)\", err, string(json)) } if !reflect.DeepEqual(podList, &podListOut) { t.Errorf(\"expected:n%#vngot:n%#vn%s\", podList, &podListOut, string(json)) } externalPodList, err := testapi.Converter().ConvertToVersion(podList, \"v1beta3\") if err != nil { t.Errorf(\"unexpected error: %v\", 
err) } yaml, err := yaml.Marshal(externalPodList) if err != nil { t.Errorf(\"unexpected error: %v\", err) } parsed, podListOut, err = tryDecodePodList(yaml, noDefault) if !parsed { t.Errorf(\"expected to have parsed file: (%s)\", string(yaml)) } if err != nil { t.Errorf(\"unexpected error: %v (%s)\", err, string(yaml)) } if !reflect.DeepEqual(podList, &podListOut) { t.Errorf(\"expected:n%#vngot:n%#vn%s\", pod, &podListOut, string(yaml)) } } ", "commid": "kubernetes_pr_7515"}], "negative_passages": []} {"query_id": "q-en-kubernetes-528257a8d86f973b92cc017b02ead0ef3ea1934cb7f6381428f555668a8c1a21", "query": "As the title suggests, I can find no documentation on KUBEADMISSIONCONTROL (from the apiserver configuration), and I don't know golang well enough to determine the answer (yet) from the source code. Google search yields only 3 hits today, and all of them are copies of the configuration file itself with no commentary. As an admin, I'd be grateful for simple documentation on all the variables, their possible settings, and what those really mean. If that already exists, please identify where it exists. Though my formal issue is the documentation, the problem that leads me here is that my pods can't query the apiserver. Given the following, my connection is reset: Any assistance would be greatly appreciated, though my formal issue remains the lack of thorough documentation on the configuration. I'll continue learning golang in order to try to figure out what my problem may be.\nI agree a document is needed, will look to put something together describing what each option does.\nThanks! :)\nThat being said, I recommend we scope this issue to just documentation being needed, and if you need assistance with your other topic, treat it as a separate issue.\nQuite reasonable! If you think that documentation will be before the weekend, I'll wait until that's available and see if I can figure it out myself; otherwise, I'll make a separate issue for that.", "positive_passages": [{"docid": "doc-en-kubernetes-79c4280a882321b8dcfa3c1401200cbcd1da5d8baccc1d70fa767cc195e7bf1f", "text": " # Admission Controllers ## What are they? An admission control plug-in is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object. The plug-in code is in the API server process and must be compiled into the binary in order to be used at this time. Each admission control plug-in is run in sequence before a request is accepted into the cluster. If any of the plug-ins in the sequence reject the request, the entire request is rejected immediately and an error is returned to the end-user. Admission control plug-ins may mutate the incoming object in some cases to apply system configured defaults. In addition, admission control plug-ins may mutate related resources as part of request processing to do things like increment quota usage. ## Why do I need them? Many advanced features in Kubernetes require an admission control plug-in to be enabled in order to properly support the feature. As a result, a Kubernetes API server that is not properly configured with the right set of admission control plug-ins is an incomplete server and will not support all the features you expect. ## How do I turn on an admission control plug-in? The Kubernetes API server supports a flag, ```admission_control``` that takes a comma-delimited, ordered list of admission control choices to invoke prior to modifying objects in the cluster. ## What does each plug-in do? 
### AlwaysAdmit This plug-in will accept all incoming requests made to the Kubernetes API server. ### AlwaysDeny This plug-in will reject all mutating requests made to the Kubernetes API server. It's largely intended for testing purposes and is not recommended for usage in a real deployment. ### DenyExecOnPrivileged This plug-in will intercept all requests to exec a command in a pod if that pod has a privileged container. If your cluster supports privileged containers, and you want to restrict the ability of end-users to exec commands in those containers, we strongly encourage enabling this plug-in. ### ServiceAccount This plug-in limits admission of Pod creation requests based on the Pod's ```ServiceAccount```. 1. If the pod does not have a ```ServiceAccount```, it modifies the pod's ```ServiceAccount``` to \"default\". 2. It ensures that the ```ServiceAccount``` referenced by a pod exists. 3. If ```LimitSecretReferences``` is true, it rejects the pod if the pod references ```Secret``` objects which the pods ```ServiceAccount``` does not reference. 4. If the pod does not contain any ```ImagePullSecrets```, the ```ImagePullSecrets``` of the ```ServiceAccount``` are added to the pod. 5. If ```MountServiceAccountToken``` is true, it adds a ```VolumeMount``` with the pod's ```ServiceAccount``` API token secret to containers in the pod. We strongly recommend using this plug-in if you intend to make use of Kubernetes ```ServiceAccount``` objects. ### SecurityContextDeny This plug-in will deny any ```SecurityContext``` that defines options that were not available on the ```Container```. ### ResourceQuota This plug-in will observe the incoming request and ensure that it does not violate any of the constraints enumerated in the ```ResourceQuota``` object in a ```Namespace```. If you are using ```ResourceQuota``` objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints. It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is so that quota is not prematurely incremented only for the request to be rejected later in admission control. ### LimitRanger This plug-in will observe the incoming request and ensure that it does not violate any of the constraints enumerated in the ```LimitRange``` object in a ```Namespace```. If you are using ```LimitRange``` objects in your Kubernetes deployment, you MUST use this plug-in to enforce those constraints. ### NamespaceExists This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes ```Namespace``` and reject the request if the ```Namespace``` was not previously created. We strongly recommend running this plug-in to ensure integrity of your data. ### NamespaceAutoProvision (deprecated) This plug-in will observe all incoming requests that attempt to create a resource in a Kubernetes ```Namespace``` and create a new ```Namespace``` if one did not already exist previously. We strongly recommend ```NamespaceExists``` over ```NamespaceAutoProvision```. ### NamespaceLifecycle This plug-in enforces that a ```Namespace``` that is undergoing termination cannot have new content created in it. A ```Namespace``` deletion kicks off a sequence of operations that remove all content (pods, services, etc.) in that namespace. In order to enforce integrity of that process, we strongly recommend running this plug-in. 
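As an aside, the sequencing behaviour described in "What are they?" above (plug-ins run in order, any rejection aborts the request, and plug-ins may mutate the object) can be illustrated with a toy sketch. This is not the real apiserver admission API; the plugin interface and the two example plug-ins below are simplified stand-ins invented for illustration.

```go
package main

import (
	"errors"
	"fmt"
)

// request is a stand-in for an incoming API object under admission.
type request struct {
	Namespace string
	Kind      string
}

// plugin is a simplified stand-in for an admission plug-in: it may mutate
// the request in place or reject it by returning an error.
type plugin interface {
	Admit(r *request) error
}

// defaultNamespace mutates the request, mirroring plug-ins that apply defaults.
type defaultNamespace struct{}

func (defaultNamespace) Admit(r *request) error {
	if r.Namespace == "" {
		r.Namespace = "default"
	}
	return nil
}

// denyKind rejects requests for a configured kind.
type denyKind struct{ kind string }

func (d denyKind) Admit(r *request) error {
	if r.Kind == d.kind {
		return errors.New("admission denied for kind " + d.kind)
	}
	return nil
}

// admit runs the configured plug-ins in order; the first rejection aborts.
func admit(chain []plugin, r *request) error {
	for _, p := range chain {
		if err := p.Admit(r); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	chain := []plugin{defaultNamespace{}, denyKind{kind: "Secret"}}
	r := &request{Kind: "Pod"}
	fmt.Println(admit(chain, r), r.Namespace) // <nil> default
}
```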
Once ```NamespaceAutoProvision``` is deprecated, we anticipate ```NamespaceLifecycle``` and ```NamespaceExists``` will be merged into a single plug-in that enforces the life-cycle of a ```Namespace``` in Kubernetes. ## Is there a recommended set of plug-ins to use? Yes. For Kubernetes 1.0, we strongly recommend running the following set of admission control plug-ins: ```shell --admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota ``` ", "commid": "kubernetes_pr_8936"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5cad7f0b9d7423b15ef1aff818cb7d7539e6f84907a939c490fb9042ee93c984", "query": "We seem to have an occasional consistency issue between the client and server versions on our Jenkins GCE e2e project. Build 6295 failed this morning because the client version was newer than the server version: The server version matches the build was was immediately running in 6294, which was aborted early. I wonder if (somehow) the cleanup didn't work properly? (Which is a bit worrying if so.) Here's the version from 6294: A short while later, build 6297 failed, this time with the client version older than the server version: This one makes a little bit more sense: there was a new build published between downloading the client/test tarballs and setting up the cluster. I'm guessing that somewhere in the cluster setup scripts the latest CI release file was read again, rather than using the version read for the client/test tarballs. (I haven't yet worked out where exactly it's read, but it should be an easier fix.) cc\nA side issue: we should probably figure out why the build git client is being marked dirty. Since we're checking out the head version from GitHub, it should ideally be a clean client.\nThe -dirty bit is now fixed by .\nThis is caused by different Jenkins jobs uploading to the same GCS bucket and clobbering each other. In we see: Over in\nThis also explains the issues seen in ; the job is also running in the same project.\nAh, good sleuthing. We avoided this issue in the PR builds by setting (used ), but maybe we should always be setting to include the Jenkins job name (and optionally executor/node number).\nThe original idea was to run every e2e job in it's own project, to avoid these sorts of name clashes. But we seem to have regressed and now have many jobs sharing the same projects. Is project-per-job not a better approach? It also helps with resource leaks and the likes. Just a thought.\nI'd argue this is actually uncovering a (dev) deployment bug; I should be able to deploy different versions of Kubernetes to the same project. I know you and I disagree (and probably always will) on the strategy here. :smile:\nfwiw I think I agree with that if project-per-job avoids this issue and helps identify resource leaks, it's a pragmatic near-term improvement in the state of the world that would result in more benefit than the cost of changing all the everythings to support multi-job-in-project. Re: devs deploying different versions of k8s to the same project, P3 feature request?\nI was just about to suggest the same thing. I will file a separate issue, and fix this in the short term an easier way. is out to make it easier to isolate a run to a project.\nAlso it seems like a reasonable thing (higher than P2) to want to deploy multiple clusters to the same project safely in prod, not just dev. Disagree?\nMultiple clusters in prod using well-defined build versions such as v1.1.4 = P0. 
Multiple clusters in prod using our cooked-up test-building system = non-goal?\nSGTM; this is prod code, though, not test code; it's used every time we call , regardless of whether it's a CI build or released version.\nFYI: kubernetes-e2e-gce-reboot/ has one reboot test failed due to kube-proxy image change. But that build doesn't include pr yet.\nlooks like it. That run used version , but the first build with was .", "positive_passages": [{"docid": "doc-en-kubernetes-e06112a56a76529d48d6921c89004d4d28702de270d0190309013ca2dca3f8c2", "text": "KUBE_PROMPT_FOR_UPDATE=y KUBE_SKIP_UPDATE=${KUBE_SKIP_UPDATE-\"n\"} # Suffix to append to the staging path used for the server tars. Useful if # multiple versions of the server are being used in the same project # simultaneously (e.g. on Jenkins). KUBE_GCS_STAGING_PATH_SUFFIX=${KUBE_GCS_STAGING_PATH_SUFFIX-\"\"} # How long (in seconds) to wait for cluster initialization. KUBE_CLUSTER_INITIALIZATION_TIMEOUT=${KUBE_CLUSTER_INITIALIZATION_TIMEOUT:-300}", "commid": "kubernetes_pr_20036"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5cad7f0b9d7423b15ef1aff818cb7d7539e6f84907a939c490fb9042ee93c984", "query": "We seem to have an occasional consistency issue between the client and server versions on our Jenkins GCE e2e project. Build 6295 failed this morning because the client version was newer than the server version: The server version matches the build was was immediately running in 6294, which was aborted early. I wonder if (somehow) the cleanup didn't work properly? (Which is a bit worrying if so.) Here's the version from 6294: A short while later, build 6297 failed, this time with the client version older than the server version: This one makes a little bit more sense: there was a new build published between downloading the client/test tarballs and setting up the cluster. I'm guessing that somewhere in the cluster setup scripts the latest CI release file was read again, rather than using the version read for the client/test tarballs. (I haven't yet worked out where exactly it's read, but it should be an easier fix.) cc\nA side issue: we should probably figure out why the build git client is being marked dirty. Since we're checking out the head version from GitHub, it should ideally be a clean client.\nThe -dirty bit is now fixed by .\nThis is caused by different Jenkins jobs uploading to the same GCS bucket and clobbering each other. In we see: Over in\nThis also explains the issues seen in ; the job is also running in the same project.\nAh, good sleuthing. We avoided this issue in the PR builds by setting (used ), but maybe we should always be setting to include the Jenkins job name (and optionally executor/node number).\nThe original idea was to run every e2e job in it's own project, to avoid these sorts of name clashes. But we seem to have regressed and now have many jobs sharing the same projects. Is project-per-job not a better approach? It also helps with resource leaks and the likes. Just a thought.\nI'd argue this is actually uncovering a (dev) deployment bug; I should be able to deploy different versions of Kubernetes to the same project. I know you and I disagree (and probably always will) on the strategy here. :smile:\nfwiw I think I agree with that if project-per-job avoids this issue and helps identify resource leaks, it's a pragmatic near-term improvement in the state of the world that would result in more benefit than the cost of changing all the everythings to support multi-job-in-project. 
Re: devs deploying different versions of k8s to the same project, P3 feature request?\nI was just about to suggest the same thing. I will file a separate issue, and fix this in the short term an easier way. is out to make it easier to isolate a run to a project.\nAlso it seems like a reasonable thing (higher than P2) to want to deploy multiple clusters to the same project safely in prod, not just dev. Disagree?\nMultiple clusters in prod using well-defined build versions such as v1.1.4 = P0. Multiple clusters in prod using our cooked-up test-building system = non-goal?\nSGTM; this is prod code, though, not test code; it's used every time we call , regardless of whether it's a CI build or released version.\nFYI: kubernetes-e2e-gce-reboot/ has one reboot test failed due to kube-proxy image change. But that build doesn't include pr yet.\nlooks like it. That run used version , but the first build with was .", "positive_passages": [{"docid": "doc-en-kubernetes-91ef31c9ba0c2473d0e9da5ed4a4643c4cb38ca8a81fd5df7dfe87230fc05b92", "text": "gsutil mb \"${staging_bucket}\" fi local -r staging_path=\"${staging_bucket}/devel${KUBE_GCS_STAGING_PATH_SUFFIX}\" local -r staging_path=\"${staging_bucket}/${INSTANCE_PREFIX}-devel\" SERVER_BINARY_TAR_HASH=$(sha1sum-file \"${SERVER_BINARY_TAR}\") SALT_TAR_HASH=$(sha1sum-file \"${SALT_TAR}\")", "commid": "kubernetes_pr_20036"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5cad7f0b9d7423b15ef1aff818cb7d7539e6f84907a939c490fb9042ee93c984", "query": "We seem to have an occasional consistency issue between the client and server versions on our Jenkins GCE e2e project. Build 6295 failed this morning because the client version was newer than the server version: The server version matches the build was was immediately running in 6294, which was aborted early. I wonder if (somehow) the cleanup didn't work properly? (Which is a bit worrying if so.) Here's the version from 6294: A short while later, build 6297 failed, this time with the client version older than the server version: This one makes a little bit more sense: there was a new build published between downloading the client/test tarballs and setting up the cluster. I'm guessing that somewhere in the cluster setup scripts the latest CI release file was read again, rather than using the version read for the client/test tarballs. (I haven't yet worked out where exactly it's read, but it should be an easier fix.) cc\nA side issue: we should probably figure out why the build git client is being marked dirty. Since we're checking out the head version from GitHub, it should ideally be a clean client.\nThe -dirty bit is now fixed by .\nThis is caused by different Jenkins jobs uploading to the same GCS bucket and clobbering each other. In we see: Over in\nThis also explains the issues seen in ; the job is also running in the same project.\nAh, good sleuthing. We avoided this issue in the PR builds by setting (used ), but maybe we should always be setting to include the Jenkins job name (and optionally executor/node number).\nThe original idea was to run every e2e job in it's own project, to avoid these sorts of name clashes. But we seem to have regressed and now have many jobs sharing the same projects. Is project-per-job not a better approach? It also helps with resource leaks and the likes. Just a thought.\nI'd argue this is actually uncovering a (dev) deployment bug; I should be able to deploy different versions of Kubernetes to the same project. 
I know you and I disagree (and probably always will) on the strategy here. :smile:\nfwiw I think I agree with that if project-per-job avoids this issue and helps identify resource leaks, it's a pragmatic near-term improvement in the state of the world that would result in more benefit than the cost of changing all the everythings to support multi-job-in-project. Re: devs deploying different versions of k8s to the same project, P3 feature request?\nI was just about to suggest the same thing. I will file a separate issue, and fix this in the short term an easier way. is out to make it easier to isolate a run to a project.\nAlso it seems like a reasonable thing (higher than P2) to want to deploy multiple clusters to the same project safely in prod, not just dev. Disagree?\nMultiple clusters in prod using well-defined build versions such as v1.1.4 = P0. Multiple clusters in prod using our cooked-up test-building system = non-goal?\nSGTM; this is prod code, though, not test code; it's used every time we call , regardless of whether it's a CI build or released version.\nFYI: kubernetes-e2e-gce-reboot/ has one reboot test failed due to kube-proxy image change. But that build doesn't include pr yet.\nlooks like it. That run used version , but the first build with was .", "positive_passages": [{"docid": "doc-en-kubernetes-5faed00eba9de025efb411dc4db0f452e8c9be1840c5183c9253147a0efde608", "text": "${GCE_SLOW_TESTS[@]:+${GCE_SLOW_TESTS[@]}} )\"} : ${KUBE_GCE_INSTANCE_PREFIX:=\"e2e-gce-${NODE_NAME}-${EXECUTOR_NUMBER}\"} : ${KUBE_GCS_STAGING_PATH_SUFFIX:=\"-${NODE_NAME}-${EXECUTOR_NUMBER}\"} : ${PROJECT:=\"kubernetes-jenkins-pull\"} : ${ENABLE_DEPLOYMENTS:=true} # Override GCE defaults", "commid": "kubernetes_pr_20036"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5cad7f0b9d7423b15ef1aff818cb7d7539e6f84907a939c490fb9042ee93c984", "query": "We seem to have an occasional consistency issue between the client and server versions on our Jenkins GCE e2e project. Build 6295 failed this morning because the client version was newer than the server version: The server version matches the build was was immediately running in 6294, which was aborted early. I wonder if (somehow) the cleanup didn't work properly? (Which is a bit worrying if so.) Here's the version from 6294: A short while later, build 6297 failed, this time with the client version older than the server version: This one makes a little bit more sense: there was a new build published between downloading the client/test tarballs and setting up the cluster. I'm guessing that somewhere in the cluster setup scripts the latest CI release file was read again, rather than using the version read for the client/test tarballs. (I haven't yet worked out where exactly it's read, but it should be an easier fix.) cc\nA side issue: we should probably figure out why the build git client is being marked dirty. Since we're checking out the head version from GitHub, it should ideally be a clean client.\nThe -dirty bit is now fixed by .\nThis is caused by different Jenkins jobs uploading to the same GCS bucket and clobbering each other. In we see: Over in\nThis also explains the issues seen in ; the job is also running in the same project.\nAh, good sleuthing. We avoided this issue in the PR builds by setting (used ), but maybe we should always be setting to include the Jenkins job name (and optionally executor/node number).\nThe original idea was to run every e2e job in it's own project, to avoid these sorts of name clashes. 
But we seem to have regressed and now have many jobs sharing the same projects. Is project-per-job not a better approach? It also helps with resource leaks and the likes. Just a thought.\nI'd argue this is actually uncovering a (dev) deployment bug; I should be able to deploy different versions of Kubernetes to the same project. I know you and I disagree (and probably always will) on the strategy here. :smile:\nfwiw I think I agree with that if project-per-job avoids this issue and helps identify resource leaks, it's a pragmatic near-term improvement in the state of the world that would result in more benefit than the cost of changing all the everythings to support multi-job-in-project. Re: devs deploying different versions of k8s to the same project, P3 feature request?\nI was just about to suggest the same thing. I will file a separate issue, and fix this in the short term an easier way. is out to make it easier to isolate a run to a project.\nAlso it seems like a reasonable thing (higher than P2) to want to deploy multiple clusters to the same project safely in prod, not just dev. Disagree?\nMultiple clusters in prod using well-defined build versions such as v1.1.4 = P0. Multiple clusters in prod using our cooked-up test-building system = non-goal?\nSGTM; this is prod code, though, not test code; it's used every time we call , regardless of whether it's a CI build or released version.\nFYI: kubernetes-e2e-gce-reboot/ has one reboot test failed due to kube-proxy image change. But that build doesn't include pr yet.\nlooks like it. That run used version , but the first build with was .", "positive_passages": [{"docid": "doc-en-kubernetes-2ec150b3d8333d0433b816cb43a5ff4257e730671431ce9d6b75f5ccaa2a7e42", "text": "export KUBE_GCE_ZONE=${E2E_ZONE} export KUBE_GCE_NETWORK=${E2E_NETWORK} export KUBE_GCE_INSTANCE_PREFIX=${KUBE_GCE_INSTANCE_PREFIX:-} export KUBE_GCS_STAGING_PATH_SUFFIX=${KUBE_GCS_STAGING_PATH_SUFFIX:-} export KUBE_GCE_NODE_PROJECT=${KUBE_GCE_NODE_PROJECT:-} export KUBE_GCE_NODE_IMAGE=${KUBE_GCE_NODE_IMAGE:-} export KUBE_OS_DISTRIBUTION=${KUBE_OS_DISTRIBUTION:-}", "commid": "kubernetes_pr_20036"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3aad09bd638f4d0e6738065aebd7ae97fbb23d73b8dd9d1472c634994380848d", "query": "We need to be squelching the auth variables on upgrade anyways, but this complaint gets issued (and it proceeds anyways): cc", "positive_passages": [{"docid": "doc-en-kubernetes-a9eafe3272debeff63d61e53045a3064909a24dfb61be6506741a494fab9a116", "text": "KUBE_PROXY_TOKEN: $(yaml-quote ${KUBE_PROXY_TOKEN:-}) ADMISSION_CONTROL: $(yaml-quote ${ADMISSION_CONTROL:-}) MASTER_IP_RANGE: $(yaml-quote ${MASTER_IP_RANGE}) CA_CERT: $(yaml-quote ${CA_CERT_BASE64}) CA_CERT: $(yaml-quote ${CA_CERT_BASE64:-}) EOF if [[ \"${master}\" == \"true\" ]]; then", "commid": "kubernetes_pr_9074"}], "negative_passages": []} {"query_id": "q-en-kubernetes-661345ebb8c9ce30f7455d4bf53da418af157a4bd40a657033e168d627bd4150", "query": "In case of cluster crash and status recovery, the persistentVolumeClaimBinder would not re-add available volumes to its internal index. Users would not be able to claim those volumes.\nThis is really bad. 
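The fix discussed in this thread (and shown in the diff quoted below) boils down to: when the binder syncs a volume that is already Available, make sure it is present in the in-memory index so claims can still match it after a controller restart. A minimal sketch of that idea, using simplified stand-in types rather than the real binder code:

```go
package main

import "fmt"

type volumePhase string

const volumeAvailable volumePhase = "Available"

type volume struct {
	name  string
	phase volumePhase
}

// index is a simplified stand-in for the binder's in-memory volume index.
type index map[string]*volume

func (ix index) add(v *volume) { ix[v.name] = v }

func (ix index) has(v *volume) bool {
	_, ok := ix[v.name]
	return ok
}

// syncVolume re-adds an Available volume that is missing from the index,
// e.g. after the controller restarts and re-lists volumes. This is the step
// the original binder skipped, leaving restored volumes unclaimable.
func syncVolume(ix index, v *volume) {
	if v.phase == volumeAvailable && !ix.has(v) {
		ix.add(v)
	}
}

func main() {
	ix := index{}
	pv := &volume{name: "pv-01", phase: volumeAvailable}
	syncVolume(ix, pv)
	fmt.Println(ix.has(pv)) // true: the volume is claimable again
}
```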
I think we should just nix Pending phase and always consider every Available volume for indexing - map lookups are cheap.", "positive_passages": [{"docid": "doc-en-kubernetes-b3893d0cd50188792747a411e44b1be2c928909f6df0acc3db87c34d993db92f", "text": "// available volumes await a claim case api.VolumeAvailable: // TODO: remove api.VolumePending phase altogether _, exists, err := volumeIndex.Get(volume) if err != nil { return err } if !exists { volumeIndex.Add(volume) } if volume.Spec.ClaimRef != nil { _, err := binderClient.GetPersistentVolumeClaim(volume.Spec.ClaimRef.Namespace, volume.Spec.ClaimRef.Name) if err == nil {", "commid": "kubernetes_pr_9282"}], "negative_passages": []} {"query_id": "q-en-kubernetes-661345ebb8c9ce30f7455d4bf53da418af157a4bd40a657033e168d627bd4150", "query": "In case of cluster crash and status recovery, the persistentVolumeClaimBinder would not re-add available volumes to its internal index. Users would not be able to claim those volumes.\nThis is really bad. I think we should just nix Pending phase and always consider every Available volume for indexing - map lookups are cheap.", "positive_passages": [{"docid": "doc-en-kubernetes-46da62bf3382b5305f544e79819f393501c0e121186ec9d1d36b19c5d1036f96", "text": "} } func TestMissingFromIndex(t *testing.T) { api.ForTesting_ReferencesAllowBlankSelfLinks = true o := testclient.NewObjects(api.Scheme, api.Scheme) if err := testclient.AddObjectsFromPath(\"../../examples/persistent-volumes/claims/claim-01.yaml\", o, api.Scheme); err != nil { t.Fatal(err) } if err := testclient.AddObjectsFromPath(\"../../examples/persistent-volumes/volumes/local-01.yaml\", o, api.Scheme); err != nil { t.Fatal(err) } client := &testclient.Fake{ReactFn: testclient.ObjectReaction(o, latest.RESTMapper)} pv, err := client.PersistentVolumes().Get(\"any\") if err != nil { t.Error(\"Unexpected error getting PV from client: %v\", err) } claim, error := client.PersistentVolumeClaims(\"ns\").Get(\"any\") if error != nil { t.Errorf(\"Unexpected error getting PVC from client: %v\", err) } volumeIndex := NewPersistentVolumeOrderedIndex() mockClient := &mockBinderClient{ volume: pv, claim: claim, } // the default value of the PV is Pending. // if has previously been processed by the binder, it's status in etcd would be Available. // Only Pending volumes were being indexed and made ready for claims. 
pv.Status.Phase = api.VolumeAvailable // adds the volume to the index, making the volume available syncVolume(volumeIndex, mockClient, pv) if pv.Status.Phase != api.VolumeAvailable { t.Errorf(\"Expected phase %s but got %s\", api.VolumeBound, pv.Status.Phase) } // an initial sync for a claim will bind it to an unbound volume, triggers state change err = syncClaim(volumeIndex, mockClient, claim) if err != nil { t.Fatalf(\"Expected Clam to be bound, instead got an error: %+vn\", err) } // state change causes another syncClaim to update statuses syncClaim(volumeIndex, mockClient, claim) // claim updated volume's status, causing an update and syncVolume call syncVolume(volumeIndex, mockClient, pv) if pv.Spec.ClaimRef == nil { t.Errorf(\"Expected ClaimRef but got nil for pv.Status.ClaimRef: %+vn\", pv) } if pv.Status.Phase != api.VolumeBound { t.Errorf(\"Expected phase %s but got %s\", api.VolumeBound, pv.Status.Phase) } if claim.Status.Phase != api.ClaimBound { t.Errorf(\"Expected phase %s but got %s\", api.ClaimBound, claim.Status.Phase) } if len(claim.Status.AccessModes) != len(pv.Spec.AccessModes) { t.Errorf(\"Expected phase %s but got %s\", api.ClaimBound, claim.Status.Phase) } if claim.Status.AccessModes[0] != pv.Spec.AccessModes[0] { t.Errorf(\"Expected access mode %s but got %s\", claim.Status.AccessModes[0], pv.Spec.AccessModes[0]) } // pretend the user deleted their claim mockClient.claim = nil syncVolume(volumeIndex, mockClient, pv) if pv.Status.Phase != api.VolumeReleased { t.Errorf(\"Expected phase %s but got %s\", api.VolumeReleased, pv.Status.Phase) } } type mockBinderClient struct { volume *api.PersistentVolume claim *api.PersistentVolumeClaim", "commid": "kubernetes_pr_9282"}], "negative_passages": []} {"query_id": "q-en-kubernetes-a52b59c7d2fad26006f4e403c6ccd7d769148b1428731cb4cfa9b4cdf94eb1c4", "query": "The \"\" E2E test will sometimes fail because it fails to delete the PD used in the test after the test is finished. The error: Detailed test log: The disk that fails to delete still shows up in : My attempts to manually try to delete the PD fail with: (the work around for ) doesn't change anything here:\nis the disk still mounted ? It looks like the detach call did not succeed for some reason\nI'll have to get another repro and verify.\nokay cool. If you get a repro check to see if the disk is still mounted. If so then the cleanup code is not being called. If not then the clean up started, detach was called but failed for some reason; look in the logs for something like \"googleapi: some error about detaching\". It can also just be a race condition. That is, detach was called but is taking too long; if the disk does not detach then you cannot delete it.", "positive_passages": [{"docid": "doc-en-kubernetes-c525715b0238f5e5f4f523c2822e7128591ff26a4699b7f0b4ff011c4034daa1", "text": " /* Copyright 2015 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
*/ package operationmanager import ( \"fmt\" \"sync\" ) // Operation Manager is a thread-safe interface for keeping track of multiple pending async operations. type OperationManager interface { // Called when the operation with the given ID has started. // Creates a new channel with specified buffer size tracked with the specified ID. // Returns a read-only version of the newly created channel. // Returns an error if an entry with the specified ID already exists (previous entry must be removed by calling Close). Start(id string, bufferSize uint) (<-chan interface{}, error) // Called when the operation with the given ID has terminated. // Closes and removes the channel associated with ID. // Returns an error if no associated channel exists. Close(id string) error // Attempts to send msg to the channel associated with ID. // Returns an error if no associated channel exists. Send(id string, msg interface{}) error } // Returns a new instance of a channel manager. func NewOperationManager() OperationManager { return &operationManager{ chanMap: make(map[string]chan interface{}), } } type operationManager struct { sync.RWMutex chanMap map[string]chan interface{} } // Called when the operation with the given ID has started. // Creates a new channel with specified buffer size tracked with the specified ID. // Returns a read-only version of the newly created channel. // Returns an error if an entry with the specified ID already exists (previous entry must be removed by calling Close). func (cm *operationManager) Start(id string, bufferSize uint) (<-chan interface{}, error) { cm.Lock() defer cm.Unlock() if _, exists := cm.chanMap[id]; exists { return nil, fmt.Errorf(\"id %q already exists\", id) } cm.chanMap[id] = make(chan interface{}, bufferSize) return cm.chanMap[id], nil } // Called when the operation with the given ID has terminated. // Closes and removes the channel associated with ID. // Returns an error if no associated channel exists. func (cm *operationManager) Close(id string) error { cm.Lock() defer cm.Unlock() if _, exists := cm.chanMap[id]; !exists { return fmt.Errorf(\"id %q not found\", id) } close(cm.chanMap[id]) delete(cm.chanMap, id) return nil } // Attempts to send msg to the channel associated with ID. // Returns an error if no associated channel exists. func (cm *operationManager) Send(id string, msg interface{}) error { cm.RLock() defer cm.RUnlock() if _, exists := cm.chanMap[id]; !exists { return fmt.Errorf(\"id %q not found\", id) } cm.chanMap[id] <- msg return nil } ", "commid": "kubernetes_pr_10169"}], "negative_passages": []} {"query_id": "q-en-kubernetes-a52b59c7d2fad26006f4e403c6ccd7d769148b1428731cb4cfa9b4cdf94eb1c4", "query": "The \"\" E2E test will sometimes fail because it fails to delete the PD used in the test after the test is finished. The error: Detailed test log: The disk that fails to delete still shows up in : My attempts to manually try to delete the PD fail with: (the work around for ) doesn't change anything here:\nis the disk still mounted ? It looks like the detach call did not succeed for some reason\nI'll have to get another repro and verify.\nokay cool. If you get a repro check to see if the disk is still mounted. If so then the cleanup code is not being called. If not then the clean up started, detach was called but failed for some reason; look in the logs for something like \"googleapi: some error about detaching\". It can also just be a race condition. 
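For reference, here is a small usage sketch of the OperationManager quoted above (the import path is the one used by gce_util later in this section; the PD name and the flow are made up for illustration). With bufferSize 0 the channel is unbuffered, so Send blocks until the in-flight verification goroutine receives, which is how a new attach/detach call interrupts a pending verification:

```go
package main

import (
	"fmt"

	"github.com/GoogleCloudPlatform/kubernetes/pkg/util/operationmanager"
)

func main() {
	om := operationmanager.NewOperationManager()

	// A detach-verification goroutine registers itself under the PD name.
	ch, err := om.Start("my-pd", 0 /* unbuffered: Send blocks until we receive */)
	if err != nil {
		fmt.Println("operation already in progress:", err)
		return
	}
	done := make(chan struct{})
	go func() {
		defer close(done)
		<-ch // a later attach/detach call for the same PD lands here
		fmt.Println("verification interrupted")
	}()

	// A subsequent operation on the same PD signals the goroutine to stop.
	if err := om.Send("my-pd", true); err != nil {
		fmt.Println("send failed:", err)
	}
	<-done
	om.Close("my-pd")
}
```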
That is, detach was called but is taking too long; if the disk does not detach then you cannot delete it.", "positive_passages": [{"docid": "doc-en-kubernetes-a4a883cf3b183ce3156589275caf96982d8119e388734cc9eceb23949671d1d2", "text": " /* Copyright 2015 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ // Channel Manager keeps track of multiple channels package operationmanager import ( \"testing\" ) func TestStart(t *testing.T) { // Arrange cm := NewOperationManager() chanId := \"testChanId\" testMsg := \"test message\" // Act ch, startErr := cm.Start(chanId, 1 /* bufferSize */) sigErr := cm.Send(chanId, testMsg) // Assert if startErr != nil { t.Fatalf(\"Unexpected error on Start. Expected: Actual: <%v>\", startErr) } if sigErr != nil { t.Fatalf(\"Unexpected error on Send. Expected: Actual: <%v>\", sigErr) } if actual := <-ch; actual != testMsg { t.Fatalf(\"Unexpected testMsg value. Expected: <%v> Actual: <%v>\", testMsg, actual) } } func TestStartIdExists(t *testing.T) { // Arrange cm := NewOperationManager() chanId := \"testChanId\" // Act _, startErr1 := cm.Start(chanId, 1 /* bufferSize */) _, startErr2 := cm.Start(chanId, 1 /* bufferSize */) // Assert if startErr1 != nil { t.Fatalf(\"Unexpected error on Start1. Expected: Actual: <%v>\", startErr1) } if startErr2 == nil { t.Fatalf(\"Expected error on Start2. Expected: Actual: \") } } func TestStartAndAdd2Chans(t *testing.T) { // Arrange cm := NewOperationManager() chanId1 := \"testChanId1\" chanId2 := \"testChanId2\" testMsg1 := \"test message 1\" testMsg2 := \"test message 2\" // Act ch1, startErr1 := cm.Start(chanId1, 1 /* bufferSize */) ch2, startErr2 := cm.Start(chanId2, 1 /* bufferSize */) sigErr1 := cm.Send(chanId1, testMsg1) sigErr2 := cm.Send(chanId2, testMsg2) // Assert if startErr1 != nil { t.Fatalf(\"Unexpected error on Start1. Expected: Actual: <%v>\", startErr1) } if startErr2 != nil { t.Fatalf(\"Unexpected error on Start2. Expected: Actual: <%v>\", startErr2) } if sigErr1 != nil { t.Fatalf(\"Unexpected error on Send1. Expected: Actual: <%v>\", sigErr1) } if sigErr2 != nil { t.Fatalf(\"Unexpected error on Send2. Expected: Actual: <%v>\", sigErr2) } if actual := <-ch1; actual != testMsg1 { t.Fatalf(\"Unexpected testMsg value. Expected: <%v> Actual: <%v>\", testMsg1, actual) } if actual := <-ch2; actual != testMsg2 { t.Fatalf(\"Unexpected testMsg value. Expected: <%v> Actual: <%v>\", testMsg2, actual) } } func TestStartAndAdd2ChansAndClose(t *testing.T) { // Arrange cm := NewOperationManager() chanId1 := \"testChanId1\" chanId2 := \"testChanId2\" testMsg1 := \"test message 1\" testMsg2 := \"test message 2\" // Act ch1, startErr1 := cm.Start(chanId1, 1 /* bufferSize */) ch2, startErr2 := cm.Start(chanId2, 1 /* bufferSize */) sigErr1 := cm.Send(chanId1, testMsg1) sigErr2 := cm.Send(chanId2, testMsg2) cm.Close(chanId1) sigErr3 := cm.Send(chanId1, testMsg1) // Assert if startErr1 != nil { t.Fatalf(\"Unexpected error on Start1. 
Expected: Actual: <%v>\", startErr1) } if startErr2 != nil { t.Fatalf(\"Unexpected error on Start2. Expected: Actual: <%v>\", startErr2) } if sigErr1 != nil { t.Fatalf(\"Unexpected error on Send1. Expected: Actual: <%v>\", sigErr1) } if sigErr2 != nil { t.Fatalf(\"Unexpected error on Send2. Expected: Actual: <%v>\", sigErr2) } if sigErr3 == nil { t.Fatalf(\"Expected error on Send3. Expected: Actual: \", sigErr2) } if actual := <-ch1; actual != testMsg1 { t.Fatalf(\"Unexpected testMsg value. Expected: <%v> Actual: <%v>\", testMsg1, actual) } if actual := <-ch2; actual != testMsg2 { t.Fatalf(\"Unexpected testMsg value. Expected: <%v> Actual: <%v>\", testMsg2, actual) } } ", "commid": "kubernetes_pr_10169"}], "negative_passages": []} {"query_id": "q-en-kubernetes-a52b59c7d2fad26006f4e403c6ccd7d769148b1428731cb4cfa9b4cdf94eb1c4", "query": "The \"\" E2E test will sometimes fail because it fails to delete the PD used in the test after the test is finished. The error: Detailed test log: The disk that fails to delete still shows up in : My attempts to manually try to delete the PD fail with: (the work around for ) doesn't change anything here:\nis the disk still mounted ? It looks like the detach call did not succeed for some reason\nI'll have to get another repro and verify.\nokay cool. If you get a repro check to see if the disk is still mounted. If so then the cleanup code is not being called. If not then the clean up started, detach was called but failed for some reason; look in the logs for something like \"googleapi: some error about detaching\". It can also just be a race condition. That is, detach was called but is taking too long; if the disk does not detach then you cannot delete it.", "positive_passages": [{"docid": "doc-en-kubernetes-06c38035d3e3ad22def3ac553544caf143063d6448d863db6449f24ff2b9ce8f", "text": "package gce_pd import ( \"errors\" \"fmt\" \"os\" \"path\" \"path/filepath\" \"strings\" \"time\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/cloudprovider\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/cloudprovider/gce\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/util\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/util/exec\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/util/mount\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/util/operationmanager\" \"github.com/golang/glog\" ) const ( diskByIdPath = \"/dev/disk/by-id/\" diskGooglePrefix = \"google-\" diskScsiGooglePrefix = \"scsi-0Google_PersistentDisk_\" diskPartitionSuffix = \"-part\" diskSDPath = \"/dev/sd\" diskSDPattern = \"/dev/sd*\" maxChecks = 10 maxRetries = 10 checkSleepDuration = time.Second ) // Singleton operation manager for managing detach clean up go routines var detachCleanupManager = operationmanager.NewOperationManager() type GCEDiskUtil struct{} // Attaches a disk specified by a volume.GCEPersistentDisk to the current kubelet. // Mounts the disk to it's global path. 
func (util *GCEDiskUtil) AttachAndMountDisk(pd *gcePersistentDisk, globalPDPath string) error { func (diskUtil *GCEDiskUtil) AttachAndMountDisk(pd *gcePersistentDisk, globalPDPath string) error { glog.V(5).Infof(\"AttachAndMountDisk(pd, %q) where pd is %#vrn\", globalPDPath, pd) // Terminate any in progress verify detach go routines, this will block until the goroutine is ready to exit because the channel is unbuffered detachCleanupManager.Send(pd.pdName, true) sdBefore, err := filepath.Glob(diskSDPattern) if err != nil { glog.Errorf(\"Error filepath.Glob(\"%s\"): %vrn\", diskSDPattern, err) } sdBeforeSet := util.NewStringSet(sdBefore...) gce, err := cloudprovider.GetCloudProvider(\"gce\", nil) if err != nil { return err } if err := gce.(*gce_cloud.GCECloud).AttachDisk(pd.pdName, pd.readOnly); err != nil { return err } devicePaths := []string{ path.Join(\"/dev/disk/by-id/\", \"google-\"+pd.pdName), path.Join(\"/dev/disk/by-id/\", \"scsi-0Google_PersistentDisk_\"+pd.pdName), } if pd.partition != \"\" { for i, path := range devicePaths { devicePaths[i] = path + \"-part\" + pd.partition } } //TODO(jonesdl) There should probably be better method than busy-waiting here. numTries := 0 devicePath := \"\" // Wait for the disk device to be created for { for _, path := range devicePaths { _, err := os.Stat(path) if err == nil { devicePath = path break } if err != nil && !os.IsNotExist(err) { return err } } if devicePath != \"\" { break } numTries++ if numTries == 10 { return errors.New(\"Could not attach disk: Timeout after 10s\") } time.Sleep(time.Second) devicePath, err := verifyAttached(pd, sdBeforeSet, gce) if err != nil { return err } // Only mount the PD globally once.", "commid": "kubernetes_pr_10169"}], "negative_passages": []} {"query_id": "q-en-kubernetes-a52b59c7d2fad26006f4e403c6ccd7d769148b1428731cb4cfa9b4cdf94eb1c4", "query": "The \"\" E2E test will sometimes fail because it fails to delete the PD used in the test after the test is finished. The error: Detailed test log: The disk that fails to delete still shows up in : My attempts to manually try to delete the PD fail with: (the work around for ) doesn't change anything here:\nis the disk still mounted ? It looks like the detach call did not succeed for some reason\nI'll have to get another repro and verify.\nokay cool. If you get a repro check to see if the disk is still mounted. If so then the cleanup code is not being called. If not then the clean up started, detach was called but failed for some reason; look in the logs for something like \"googleapi: some error about detaching\". It can also just be a race condition. That is, detach was called but is taking too long; if the disk does not detach then you cannot delete it.", "positive_passages": [{"docid": "doc-en-kubernetes-eae09b24dc0853551cbcc46d33e90c73a064d72a720c7c95fa6a7e78de906d3e", "text": "func (util *GCEDiskUtil) DetachDisk(pd *gcePersistentDisk) error { // Unmount the global PD mount, which should be the only one. 
globalPDPath := makeGlobalPDName(pd.plugin.host, pd.pdName) glog.V(5).Infof(\"DetachDisk(pd) where pd is %#v and the globalPDPath is %qrn\", pd, globalPDPath) // Terminate any in progress verify detach go routines, this will block until the goroutine is ready to exit because the channel is unbuffered detachCleanupManager.Send(pd.pdName, true) if err := pd.mounter.Unmount(globalPDPath); err != nil { return err }", "commid": "kubernetes_pr_10169"}], "negative_passages": []} {"query_id": "q-en-kubernetes-a52b59c7d2fad26006f4e403c6ccd7d769148b1428731cb4cfa9b4cdf94eb1c4", "query": "The \"\" E2E test will sometimes fail because it fails to delete the PD used in the test after the test is finished. The error: Detailed test log: The disk that fails to delete still shows up in : My attempts to manually try to delete the PD fail with: (the work around for ) doesn't change anything here:\nis the disk still mounted ? It looks like the detach call did not succeed for some reason\nI'll have to get another repro and verify.\nokay cool. If you get a repro check to see if the disk is still mounted. If so then the cleanup code is not being called. If not then the clean up started, detach was called but failed for some reason; look in the logs for something like \"googleapi: some error about detaching\". It can also just be a race condition. That is, detach was called but is taking too long; if the disk does not detach then you cannot delete it.", "positive_passages": [{"docid": "doc-en-kubernetes-2fe6704755207fd584ed730e615faa8a9cb609d942f64ff07d10fb8e5fce4165", "text": "if err := gce.(*gce_cloud.GCECloud).DetachDisk(pd.pdName); err != nil { return err } // Verify disk detached, retry if needed. go verifyDetached(pd, gce) return nil } // Verifys the disk device to be created has been succesffully attached, and retries if it fails. func verifyAttached(pd *gcePersistentDisk, sdBeforeSet util.StringSet, gce cloudprovider.Interface) (string, error) { devicePaths := getDiskByIdPaths(pd) for numRetries := 0; numRetries < maxRetries; numRetries++ { for numChecks := 0; numChecks < maxChecks; numChecks++ { if err := udevadmChangeToNewDrives(sdBeforeSet); err != nil { // udevadm errors should not block disk attachment, log and continue glog.Errorf(\"%v\", err) } for _, path := range devicePaths { if pathExists, err := pathExists(path); err != nil { return \"\", err } else if pathExists { // A device path has succesfully been created for the PD glog.V(5).Infof(\"Succesfully attached GCE PD %q.\", pd.pdName) return path, nil } } // Sleep then check again glog.V(5).Infof(\"Waiting for GCE PD %q to attach.\", pd.pdName) time.Sleep(checkSleepDuration) } // Try attaching the disk again glog.Warningf(\"Timed out waiting for GCE PD %q to attach. Retrying attach.\", pd.pdName) if err := gce.(*gce_cloud.GCECloud).AttachDisk(pd.pdName, pd.readOnly); err != nil { return \"\", err } } return \"\", fmt.Errorf(\"Could not attach GCE PD %q. Timeout waiting for mount paths to be created.\", pd.pdName) } // Veify the specified persistent disk device has been succesfully detached, and retries if it fails. // This function is intended to be called asynchronously as a go routine. func verifyDetached(pd *gcePersistentDisk, gce cloudprovider.Interface) { defer util.HandleCrash() // Setting bufferSize to 0 so that when senders send, they are blocked until we recieve. This avoids the need to have a separate exit check. 
ch, err := detachCleanupManager.Start(pd.pdName, 0 /* bufferSize */) if err != nil { glog.Errorf(\"Error adding %q to detachCleanupManager: %v\", pd.pdName, err) return } defer detachCleanupManager.Close(pd.pdName) devicePaths := getDiskByIdPaths(pd) for numRetries := 0; numRetries < maxRetries; numRetries++ { for numChecks := 0; numChecks < maxChecks; numChecks++ { select { case <-ch: glog.Warningf(\"Terminating GCE PD %q detach verification. Another attach/detach call was made for this PD.\", pd.pdName) return default: allPathsRemoved := true for _, path := range devicePaths { if err := udevadmChangeToDrive(path); err != nil { // udevadm errors should not block disk detachment, log and continue glog.Errorf(\"%v\", err) } if exists, err := pathExists(path); err != nil { glog.Errorf(\"Error check path: %v\", err) return } else { allPathsRemoved = allPathsRemoved && !exists } } if allPathsRemoved { // All paths to the PD have been succefully removed glog.V(5).Infof(\"Succesfully detached GCE PD %q.\", pd.pdName) return } // Sleep then check again glog.V(5).Infof(\"Waiting for GCE PD %q to detach.\", pd.pdName) time.Sleep(checkSleepDuration) } } // Try detaching disk again glog.Warningf(\"Timed out waiting for GCE PD %q to detach. Retrying detach.\", pd.pdName) if err := gce.(*gce_cloud.GCECloud).DetachDisk(pd.pdName); err != nil { glog.Errorf(\"Error on retry detach PD %q: %v\", pd.pdName, err) return } } glog.Errorf(\"Could not detach GCE PD %q. One or more mount paths was not removed.\", pd.pdName) } // Returns list of all /dev/disk/by-id/* paths for given PD. func getDiskByIdPaths(pd *gcePersistentDisk) []string { devicePaths := []string{ path.Join(diskByIdPath, diskGooglePrefix+pd.pdName), path.Join(diskByIdPath, diskScsiGooglePrefix+pd.pdName), } if pd.partition != \"\" { for i, path := range devicePaths { devicePaths[i] = path + diskPartitionSuffix + pd.partition } } return devicePaths } // Checks if the specified path exists func pathExists(path string) (bool, error) { _, err := os.Stat(path) if err == nil { return true, nil } else if os.IsNotExist(err) { return false, nil } else { return false, err } } // Calls \"udevadm trigger --action=change\" for newly created \"/dev/sd*\" drives (exist only in after set). // This is workaround for Issue #7972. Once the underlying issue has been resolved, this may be removed. func udevadmChangeToNewDrives(sdBeforeSet util.StringSet) error { sdAfter, err := filepath.Glob(diskSDPattern) if err != nil { return fmt.Errorf(\"Error filepath.Glob(\"%s\"): %vrn\", diskSDPattern, err) } for _, sd := range sdAfter { if !sdBeforeSet.Has(sd) { return udevadmChangeToDrive(sd) } } return nil } // Calls \"udevadm trigger --action=change\" on the specified drive. // drivePath must be the the block device path to trigger on, in the format \"/dev/sd*\", or a symlink to it. // This is workaround for Issue #7972. Once the underlying issue has been resolved, this may be removed. 
func udevadmChangeToDrive(drivePath string) error { glog.V(5).Infof(\"udevadmChangeToDrive: drive=%q\", drivePath) // Evaluate symlink, if any drive, err := filepath.EvalSymlinks(drivePath) if err != nil { return fmt.Errorf(\"udevadmChangeToDrive: filepath.EvalSymlinks(%q) failed with %v.\", drivePath, err) } glog.V(5).Infof(\"udevadmChangeToDrive: symlink path is %q\", drive) // Check to make sure input is \"/dev/sd*\" if !strings.Contains(drive, diskSDPath) { return fmt.Errorf(\"udevadmChangeToDrive: expected input in the form \"%s\" but drive is %q.\", diskSDPattern, drive) } // Call \"udevadm trigger --action=change --property-match=DEVNAME=/dev/sd...\" _, err = exec.New().Command( \"udevadm\", \"trigger\", \"--action=change\", fmt.Sprintf(\"--property-match=DEVNAME=%s\", drive)).CombinedOutput() if err != nil { return fmt.Errorf(\"udevadmChangeToDrive: udevadm trigger failed for drive %q with %v.\", drive, err) } return nil }", "commid": "kubernetes_pr_10169"}], "negative_passages": []} {"query_id": "q-en-kubernetes-4a5f8a117915b8e2720ab3fe6bd6b6e9a9116553d4b98258a619803800705175", "query": "If the container_probe test runs on a node without the nginx image, it sometimes takes 90s to get the pod in running. Most of this time goes in pulling the image, so I'm assuming it's registry related hiccup. Using might make it more reliable. Example failure: Note the following event in the logs:\na) we should not be pulling from docker in our tests - too hit-or-miss b) we should replace nginx (132 MB) with (4.5 MB) whenever possible. , Prashanth B wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-02394522b4c086c8ae1fbee571a0c1ec3554c23be7b74645e7bf5c28425ad8de", "text": "var _ = Describe(\"Probing container\", func() { framework := Framework{BaseName: \"container-probe\"} var podClient client.PodInterface probe := nginxProbeBuilder{} probe := webserverProbeBuilder{} BeforeEach(func() { framework.beforeEach()", "commid": "kubernetes_pr_10758"}], "negative_passages": []} {"query_id": "q-en-kubernetes-4a5f8a117915b8e2720ab3fe6bd6b6e9a9116553d4b98258a619803800705175", "query": "If the container_probe test runs on a node without the nginx image, it sometimes takes 90s to get the pod in running. Most of this time goes in pulling the image, so I'm assuming it's registry related hiccup. Using might make it more reliable. Example failure: Note the following event in the logs:\na) we should not be pulling from docker in our tests - too hit-or-miss b) we should replace nginx (132 MB) with (4.5 MB) whenever possible. 
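To make the suggestion concrete, here is a rough sketch of the kind of pod the thread argues for: a small, usually-cached webserver image with ImagePullPolicy IfNotPresent, so the readiness probe gets exercised without a long registry pull. Types and most field names are taken from the pkg/api code quoted in the surrounding passages; the HTTPGet Path field is my assumption for this era of the API, so treat the block as illustrative rather than authoritative.

```go
package main

import (
	"fmt"

	"github.com/GoogleCloudPlatform/kubernetes/pkg/api"
	"github.com/GoogleCloudPlatform/kubernetes/pkg/util"
)

// smallProbedPod builds a pod that serves HTTP from a tiny image and reports
// readiness via an HTTP probe, avoiding the image-pull delays described above.
func smallProbedPod() *api.Pod {
	return &api.Pod{
		ObjectMeta: api.ObjectMeta{Name: "test-webserver-" + string(util.NewUUID())},
		Spec: api.PodSpec{
			Containers: []api.Container{{
				Name:            "test-webserver",
				Image:           "gcr.io/google_containers/test-webserver",
				ImagePullPolicy: "IfNotPresent", // do not pull when the image is already cached
				ReadinessProbe: &api.Probe{
					Handler: api.Handler{
						HTTPGet: &api.HTTPGetAction{Path: "/"}, // Path field assumed for this API era
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(smallProbedPod().ObjectMeta.Name)
}
```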
, Prashanth B wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-709c2e2d7d4c07c9e81c5ed03d075ca9967615f17e770dedbd88ac171d494193", "text": "expectNoError(err) startTime := time.Now() expectNoError(wait.Poll(poll, 90*time.Second, func() (bool, error) { Expect(wait.Poll(poll, 90*time.Second, func() (bool, error) { p, err := podClient.Get(p.Name) if err != nil { return false, err } return api.IsPodReady(p), nil })) ready := api.IsPodReady(p) if !ready { Logf(\"pod is not yet ready; pod has phase %q.\", p.Status.Phase) return false, nil } return true, nil })).NotTo(HaveOccurred(), \"pod never became ready\") if time.Since(startTime) < 30*time.Second { Failf(\"Pod became ready before it's initial delay\")", "commid": "kubernetes_pr_10758"}], "negative_passages": []} {"query_id": "q-en-kubernetes-4a5f8a117915b8e2720ab3fe6bd6b6e9a9116553d4b98258a619803800705175", "query": "If the container_probe test runs on a node without the nginx image, it sometimes takes 90s to get the pod in running. Most of this time goes in pulling the image, so I'm assuming it's registry related hiccup. Using might make it more reliable. Example failure: Note the following event in the logs:\na) we should not be pulling from docker in our tests - too hit-or-miss b) we should replace nginx (132 MB) with (4.5 MB) whenever possible. , Prashanth B wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-bdbd229e761a48d32c7cc732b77efd50ac5b1371ac849096c360ef4df6abce2e", "text": "isReady, err := podRunningReady(p) expectNoError(err) Expect(isReady).To(BeTrue()) Expect(isReady).To(BeTrue(), \"pod should be ready\") Expect(getRestartCount(p) == 0).To(BeTrue()) restartCount := getRestartCount(p) Expect(restartCount == 0).To(BeTrue(), \"pod should have a restart count of 0 but got %v\", restartCount) }) It(\"with readiness probe that fails should never be ready and never restart\", func() {", "commid": "kubernetes_pr_10758"}], "negative_passages": []} {"query_id": "q-en-kubernetes-4a5f8a117915b8e2720ab3fe6bd6b6e9a9116553d4b98258a619803800705175", "query": "If the container_probe test runs on a node without the nginx image, it sometimes takes 90s to get the pod in running. Most of this time goes in pulling the image, so I'm assuming it's registry related hiccup. Using might make it more reliable. Example failure: Note the following event in the logs:\na) we should not be pulling from docker in our tests - too hit-or-miss b) we should replace nginx (132 MB) with (4.5 MB) whenever possible. , Prashanth B wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-3f5411c63aee66c723080ba3a7cc71d4579cac651f48bdca6fd8fbae3a536c47", "text": "expectNoError(err) isReady, err := podRunningReady(p) Expect(isReady).NotTo(BeTrue()) Expect(isReady).NotTo(BeTrue(), \"pod should be not ready\") Expect(getRestartCount(p) == 0).To(BeTrue()) restartCount := getRestartCount(p) Expect(restartCount == 0).To(BeTrue(), \"pod should have a restart count of 0 but got %v\", restartCount) }) })", "commid": "kubernetes_pr_10758"}], "negative_passages": []} {"query_id": "q-en-kubernetes-4a5f8a117915b8e2720ab3fe6bd6b6e9a9116553d4b98258a619803800705175", "query": "If the container_probe test runs on a node without the nginx image, it sometimes takes 90s to get the pod in running. Most of this time goes in pulling the image, so I'm assuming it's registry related hiccup. Using might make it more reliable. 
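The readiness check in this passage follows a common e2e pattern: poll a condition function until it reports ready or a timeout expires, then assert that readiness did not arrive before the probe's configured initial delay. A standalone sketch of that pattern (with placeholder interval, timeout, and condition values) is shown below.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil calls condition every interval until it returns true, an
// error, or the timeout elapses.
func pollUntil(interval, timeout time.Duration, condition func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		done, err := condition()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	const initialDelay = 3 * time.Second // placeholder for the probe's initialDelaySeconds
	start := time.Now()

	// Fake "pod ready" condition: ready once the initial delay has passed.
	err := pollUntil(500*time.Millisecond, 90*time.Second, func() (bool, error) {
		return time.Since(start) > initialDelay, nil
	})
	if err != nil {
		fmt.Println("pod never became ready:", err)
		return
	}
	if time.Since(start) < initialDelay {
		fmt.Println("pod became ready before its initial delay")
		return
	}
	fmt.Printf("pod ready after %v\n", time.Since(start).Round(time.Millisecond))
}
```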
Example failure: Note the following event in the logs:\na) we should not be pulling from docker in our tests - too hit-or-miss b) we should replace nginx (132 MB) with (4.5 MB) whenever possible. , Prashanth B wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-32f61a2f2c222e316f9166ad574f362eb0411d2a666c4ad79a9d2c7cf1e8b01f", "text": "func makePodSpec(readinessProbe, livenessProbe *api.Probe) *api.Pod { pod := &api.Pod{ ObjectMeta: api.ObjectMeta{Name: \"nginx-\" + string(util.NewUUID())}, ObjectMeta: api.ObjectMeta{Name: \"test-webserver-\" + string(util.NewUUID())}, Spec: api.PodSpec{ Containers: []api.Container{ { Name: \"nginx\", Image: \"nginx\", Name: \"test-webserver\", Image: \"gcr.io/google_containers/test-webserver\", LivenessProbe: livenessProbe, ReadinessProbe: readinessProbe, },", "commid": "kubernetes_pr_10758"}], "negative_passages": []} {"query_id": "q-en-kubernetes-4a5f8a117915b8e2720ab3fe6bd6b6e9a9116553d4b98258a619803800705175", "query": "If the container_probe test runs on a node without the nginx image, it sometimes takes 90s to get the pod in running. Most of this time goes in pulling the image, so I'm assuming it's registry related hiccup. Using might make it more reliable. Example failure: Note the following event in the logs:\na) we should not be pulling from docker in our tests - too hit-or-miss b) we should replace nginx (132 MB) with (4.5 MB) whenever possible. , Prashanth B wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-86eff17049f2f0326241622643b4c947d261ff0eef5c6961b0dc37945723c667", "text": "return pod } type nginxProbeBuilder struct { type webserverProbeBuilder struct { failing bool initialDelay bool } func (b nginxProbeBuilder) withFailing() nginxProbeBuilder { func (b webserverProbeBuilder) withFailing() webserverProbeBuilder { b.failing = true return b } func (b nginxProbeBuilder) withInitialDelay() nginxProbeBuilder { func (b webserverProbeBuilder) withInitialDelay() webserverProbeBuilder { b.initialDelay = true return b } func (b nginxProbeBuilder) build() *api.Probe { func (b webserverProbeBuilder) build() *api.Probe { probe := &api.Probe{ Handler: api.Handler{ HTTPGet: &api.HTTPGetAction{", "commid": "kubernetes_pr_10758"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e173b509db75d449d82cb27f028365138ce8184a0f33391f61ddb69043f21d65", "query": "I'm conducting PoC about distributed build using Kubernetes. Jenkins master distribute jenkins slave pods through Kubernetes API Server. When build is done, jenkins slave pod is also deleted by kubernetes API Server. But, kubernetes didn't delete container. (Pod is deleted well) After searching, I found kubelet\u2019s containerGC option. So, I adjusted below options to kubelet. --minimum-container-ttl-duration=0s --maximum-dead-containers=0 After adjusting both 2 options, container was deleted, too. But, kubernetes containerGC didn\u2019t delete host volume path. (/var/lib/docker/vfs/dir/..) Due to aufs\u2019s low performance, I used data volume of host instead of container inside. So, data volume was accumulated and host's inodes was fulled, too. Is there any solution to clean the containers wreckage? My environment was.. Docker: v1.6.2 Kubernetes: v0.18.2 (container based multi clustering)\nare you using the host volumes you created yourself? If this is the case, kubelet does not clean the volumes since they are external to kubelet. If not, kubelet should clean the volumes before container GC kicks in.\nNo.. volumes were created by kubenetes API server. LGTM. 
Will it be adjusted next release?\nYou said \"I used data volume of host instead of container inside\" which says you used a hostPath mount - correct? That is explicitly NOT managed by kubernetes - you've gone to the host, you get to manage it. But you also said \"Due to aufs\u2019s low performance\" - an emptyDir volume is NOT stored on aufs, but it IS managed with the pod. , gghonor wrote:\nThe issue as i understand it is dangling volumes, that is volumes that are created via a VOLUME directive in Dockerfile with no corresponding kubernetes volume in spec which therefore has no host path & is not cleaned up by the kubelet. These volumes reside in /var/lib/docker/vfs/. This waste of disk space has the potential to cause out of space errors on the node & therefore make the node unstable.\nAs the volume dir you mentioned is I assume you didn't specify any volume in your Kubernetes pod spec but did use the directive in your Dockerfile? As mentions I guess this isn't a Kubernetes managed volume as that would be in the Kubernetes volume dir, right? That dir is used by Docker for volumes specified in Dockerfiles or a docker container that is run with with no specified host path. The latter doesn't happen with Kubernetes managed containers AFAIK.\nThis makes sense. Thanks for the explanation :) I see the point of removing the dangling volumes. Even though kubelet doesn't manage those volumes, it doesn't hurt to remove them.\nDo we have a full understanding of their lifetime, though? We don't control them, I don't see how we can know when to delete them. Docker's volume implementation is just too under-defined to actually be useful. , Yu-Ju Hong wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-2af6a3a28337c77b2015a7dc3e4e63318a903b941f29da457b8d0ee76ae23e03", "text": "// Remove unidentified containers. for _, container := range unidentifiedContainers { glog.Infof(\"Removing unidentified dead container %q with ID %q\", container.name, container.id) err = cgc.dockerClient.RemoveContainer(docker.RemoveContainerOptions{ID: container.id}) err = cgc.dockerClient.RemoveContainer(docker.RemoveContainerOptions{ID: container.id, RemoveVolumes: true}) if err != nil { glog.Warningf(\"Failed to remove unidentified dead container %q: %v\", container.name, err) }", "commid": "kubernetes_pr_10821"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e173b509db75d449d82cb27f028365138ce8184a0f33391f61ddb69043f21d65", "query": "I'm conducting PoC about distributed build using Kubernetes. Jenkins master distribute jenkins slave pods through Kubernetes API Server. When build is done, jenkins slave pod is also deleted by kubernetes API Server. But, kubernetes didn't delete container. (Pod is deleted well) After searching, I found kubelet\u2019s containerGC option. So, I adjusted below options to kubelet. --minimum-container-ttl-duration=0s --maximum-dead-containers=0 After adjusting both 2 options, container was deleted, too. But, kubernetes containerGC didn\u2019t delete host volume path. (/var/lib/docker/vfs/dir/..) Due to aufs\u2019s low performance, I used data volume of host instead of container inside. So, data volume was accumulated and host's inodes was fulled, too. Is there any solution to clean the containers wreckage? My environment was.. Docker: v1.6.2 Kubernetes: v0.18.2 (container based multi clustering)\nare you using the host volumes you created yourself? If this is the case, kubelet does not clean the volumes since they are external to kubelet. 
If not, kubelet should clean the volumes before container GC kicks in.\nNo.. volumes were created by kubenetes API server. LGTM. Will it be adjusted next release?\nYou said \"I used data volume of host instead of container inside\" which says you used a hostPath mount - correct? That is explicitly NOT managed by kubernetes - you've gone to the host, you get to manage it. But you also said \"Due to aufs\u2019s low performance\" - an emptyDir volume is NOT stored on aufs, but it IS managed with the pod. , gghonor wrote:\nThe issue as i understand it is dangling volumes, that is volumes that are created via a VOLUME directive in Dockerfile with no corresponding kubernetes volume in spec which therefore has no host path & is not cleaned up by the kubelet. These volumes reside in /var/lib/docker/vfs/. This waste of disk space has the potential to cause out of space errors on the node & therefore make the node unstable.\nAs the volume dir you mentioned is I assume you didn't specify any volume in your Kubernetes pod spec but did use the directive in your Dockerfile? As mentions I guess this isn't a Kubernetes managed volume as that would be in the Kubernetes volume dir, right? That dir is used by Docker for volumes specified in Dockerfiles or a docker container that is run with with no specified host path. The latter doesn't happen with Kubernetes managed containers AFAIK.\nThis makes sense. Thanks for the explanation :) I see the point of removing the dangling volumes. Even though kubelet doesn't manage those volumes, it doesn't hurt to remove them.\nDo we have a full understanding of their lifetime, though? We don't control them, I don't see how we can know when to delete them. Docker's volume implementation is just too under-defined to actually be useful. , Yu-Ju Hong wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-a7bbe657281ccfb73f86bf819ae876fbfa00a9554e6de8a8edde6299c9159ad8", "text": "// Remove from oldest to newest (last to first). numToKeep := len(containers) - toRemove for i := numToKeep; i < len(containers); i++ { err := cgc.dockerClient.RemoveContainer(docker.RemoveContainerOptions{ID: containers[i].id}) err := cgc.dockerClient.RemoveContainer(docker.RemoveContainerOptions{ID: containers[i].id, RemoveVolumes: true}) if err != nil { glog.Warningf(\"Failed to remove dead container %q: %v\", containers[i].name, err) }", "commid": "kubernetes_pr_10821"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5e141256b52005c28fd1b24c7f05dfac8f617bee1bc04c7a011b4ab9fa62a6c6", "query": "The file: has instructions for AWS, GCE, and VMware. I'd like to see documentation for OpenStack as well. I'd be happy to do the work for this as I don't think it would take too much time. I'll add some instructions to this issue if its something that people think would be helpful.\nI would greatly appreciate any work you can do with this. I've been trying, unsuccessfully, to get the OpenStack integration working.\nOk, I created instructions in this gist: If it looks good to you I'll create a pull request and merge changes in the document after the GCE section (as it appears we're going in Alphabetical order? )\nSending a PR sounds good.", "positive_passages": [{"docid": "doc-en-kubernetes-95dd5f33db85c896c5e60ea2b5b219565cd75e89bd3e2b2246ffdcfa2ef6db7f", "text": "In one terminal, run `gcloud compute ssh master --ssh-flag=\"-L 8080:127.0.0.1:8080\"` and in a second run `gcloud compute ssh master --ssh-flag=\"-R 8080:127.0.0.1:8080\"`. 
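The fix in this PR is simply to pass RemoveVolumes: true whenever the container GC deletes a dead container, so Docker also drops any anonymous volumes created by VOLUME directives. A hedged sketch of the oldest-first pruning loop, written against a small hypothetical client interface rather than the real go-dockerclient types, looks like this:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

type deadContainer struct {
	id      string
	created time.Time
}

// containerRemover is a hypothetical stand-in for the Docker client;
// removeVolumes mirrors the RemoveVolumes option shown in the diff.
type containerRemover interface {
	RemoveContainer(id string, removeVolumes bool) error
}

// removeOldestN deletes dead containers (and their anonymous volumes),
// oldest first, keeping at most `keep` of them around for debugging.
func removeOldestN(client containerRemover, containers []deadContainer, keep int) {
	sort.Slice(containers, func(i, j int) bool {
		return containers[i].created.After(containers[j].created) // newest first
	})
	for i := keep; i < len(containers); i++ {
		if err := client.RemoveContainer(containers[i].id, true /* removeVolumes */); err != nil {
			fmt.Printf("failed to remove dead container %q: %v\n", containers[i].id, err)
		}
	}
}

type fakeClient struct{}

func (fakeClient) RemoveContainer(id string, removeVolumes bool) error {
	fmt.Printf("removing %s (removeVolumes=%v)\n", id, removeVolumes)
	return nil
}

func main() {
	now := time.Now()
	removeOldestN(fakeClient{}, []deadContainer{
		{"a", now.Add(-3 * time.Hour)},
		{"b", now.Add(-2 * time.Hour)},
		{"c", now.Add(-1 * time.Hour)},
	}, 1)
}
```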
### OpenStack These instructions are for running on the command line. Most of this you can also do through the Horizon dashboard. These instructions were tested on the Ice House release on a Metacloud distribution of OpenStack but should be similar if not the same across other versions/distributions of OpenStack. #### Make sure you can connect with OpenStack Make sure the environment variables are set for OpenStack such as: ```sh OS_TENANT_ID OS_PASSWORD OS_AUTH_URL OS_USERNAME OS_TENANT_NAME ``` Test this works with something like: ``` nova list ``` #### Get a Suitable CoreOS Image You'll need a [suitable version of CoreOS image for OpenStack] (https://coreos.com/os/docs/latest/booting-on-openstack.html) Once you download that, upload it to glance. An example is shown below: ```sh glance image-create --name CoreOS723 --container-format bare --disk-format qcow2 --file coreos_production_openstack_image.img --is-public True ``` #### Create security group ```sh nova secgroup-create kubernetes \"Kubernetes Security Group\" nova secgroup-add-rule kubernetes tcp 22 22 0.0.0.0/0 nova secgroup-add-rule kubernetes tcp 80 80 0.0.0.0/0 ``` #### Provision the Master ```sh nova boot --image --key-name --flavor --security-group kubernetes --user-data files/master.yaml kube-master ``` `````` is the CoreOS image name. In our example we can use the image we created in the previous step and put in 'CoreOS723' `````` is the keypair name that you already generated to access the instance. `````` is the flavor ID you use to size the instance. Run ```nova flavor-list``` to get the IDs. 3 on the system this was tested with gives the m1.large size. The important part is to ensure you have the files/master.yml as this is what will do all the post boot configuration. This path is relevant so we are assuming in this example that you are running the nova command in a directory where there is a subdirectory called files that has the master.yml file in it. Absolute paths also work. Next, assign it a public IP address: ``` nova floating-ip-list ``` Get an IP address that's free and run: ``` nova floating-ip-associate kube-master ``` where `````` is the IP address that was available from the ```nova floating-ip-list``` command. #### Provision Worker Nodes Edit ```node.yaml``` and replace all instances of `````` with the private IP address of the master node. You can get this by runnning ```nova show kube-master``` assuming you named your instance kube master. This is not the floating IP address you just assigned it. ```sh nova boot --image --key-name --flavor --security-group kubernetes --user-data files/node.yaml minion01 ``` This is basically the same as the master nodes but with the node.yaml post-boot script instead of the master. ### VMware Fusion #### Create the master config-drive", "commid": "kubernetes_pr_12656"}], "negative_passages": []} {"query_id": "q-en-kubernetes-0899495702d6e9ebf67f2fa89ae62937548eeddaa12f77ce123a9a812c22bf72", "query": "If is passed to the kubelet, container metrics are not included in the Prometheus metrics endpoint.", "positive_passages": [{"docid": "doc-en-kubernetes-bff761ca084cc59b8560d6888874fcb466797d1e3d96658bcf5f6917aa08cebd", "text": "Manager: m, } // Export the HTTP endpoint if a port was specified. 
if port > 0 { err = cadvisorClient.exportHTTP(port) if err != nil { return nil, err } err = cadvisorClient.exportHTTP(port) if err != nil { return nil, err } return cadvisorClient, nil }", "commid": "kubernetes_pr_13202"}], "negative_passages": []} {"query_id": "q-en-kubernetes-0899495702d6e9ebf67f2fa89ae62937548eeddaa12f77ce123a9a812c22bf72", "query": "If is passed to the kubelet, container metrics are not included in the Prometheus metrics endpoint.", "positive_passages": [{"docid": "doc-en-kubernetes-bef7a0a0b9d6383f980516f985adc985a60d2fc3003a029bc2b4894f4315115d", "text": "} func (cc *cadvisorClient) exportHTTP(port uint) error { // Register the handlers regardless as this registers the prometheus // collector properly. mux := http.NewServeMux() err := cadvisorHttp.RegisterHandlers(mux, cc, \"\", \"\", \"\", \"\", \"/metrics\") if err != nil { return err } serv := &http.Server{ Addr: fmt.Sprintf(\":%d\", port), Handler: mux, } // TODO(vmarmol): Remove this when the cAdvisor port is once again free. // If export failed, retry in the background until we are able to bind. // This allows an existing cAdvisor to be killed before this one registers. go func() { defer util.HandleCrash() err := serv.ListenAndServe() for err != nil { glog.Infof(\"Failed to register cAdvisor on port %d, retrying. Error: %v\", port, err) time.Sleep(time.Minute) err = serv.ListenAndServe() // Only start the http server if port > 0 if port > 0 { serv := &http.Server{ Addr: fmt.Sprintf(\":%d\", port), Handler: mux, } }() // TODO(vmarmol): Remove this when the cAdvisor port is once again free. // If export failed, retry in the background until we are able to bind. // This allows an existing cAdvisor to be killed before this one registers. go func() { defer util.HandleCrash() err := serv.ListenAndServe() for err != nil { glog.Infof(\"Failed to register cAdvisor on port %d, retrying. Error: %v\", port, err) time.Sleep(time.Minute) err = serv.ListenAndServe() } }() } return nil }", "commid": "kubernetes_pr_13202"}], "negative_passages": []} {"query_id": "q-en-kubernetes-0f36d617029b75ecdfb03cc044087fba40537f8a434f7633b301501da951ce4d", "query": "The first line logged from the integration test is the last line from the service account token test is then the test panics with: I think the tests are just running too close the the timeout. On successful runs, the tests take ~115 seconds, which is ridiculously close to the threshold. For example: see\nok, do you want to send a PR to increase the timeout to 3 or 4 minutes?\nSure, I can do that tonight", "positive_passages": [{"docid": "doc-en-kubernetes-b041b68e4ba1cfd8eaaf694db2f71f1cfa035aa7d484f0061a53a9be00b03e82", "text": "# KUBE_TEST_API_VERSIONS=${KUBE_TEST_API_VERSIONS:-\"v1,experimental/v1alpha1\"} KUBE_TEST_API_VERSIONS=${KUBE_TEST_API_VERSIONS:-\"v1,experimental/v1alpha1\"} # Give integration tests longer to run KUBE_TIMEOUT=${KUBE_TIMEOUT:--timeout 240s} KUBE_INTEGRATION_TEST_MAX_CONCURRENCY=${KUBE_INTEGRATION_TEST_MAX_CONCURRENCY:-\"-1\"} LOG_LEVEL=${LOG_LEVEL:-2}", "commid": "kubernetes_pr_14600"}], "negative_passages": []} {"query_id": "q-en-kubernetes-0f36d617029b75ecdfb03cc044087fba40537f8a434f7633b301501da951ce4d", "query": "The first line logged from the integration test is the last line from the service account token test is then the test panics with: I think the tests are just running too close the the timeout. On successful runs, the tests take ~115 seconds, which is ridiculously close to the threshold. 
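The shape of this fix is worth spelling out: the handler registration (which also wires up the Prometheus collector as a side effect) must happen unconditionally, and only the HTTP listener is gated on port > 0. A minimal net/http sketch of that control flow, with a placeholder metrics handler standing in for cAdvisor's real registration, is below.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// exportHTTP always registers the handlers (so metrics collection is
// wired up regardless), but only starts a listener when port > 0.
func exportHTTP(port uint) error {
	mux := http.NewServeMux()
	// Placeholder for the real registration, which hooks up the
	// Prometheus collector as a side effect.
	mux.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "# metrics would be served here")
	})

	if port > 0 {
		srv := &http.Server{Addr: fmt.Sprintf(":%d", port), Handler: mux}
		// Retry in the background so a lingering process holding the
		// port can exit before we bind.
		go func() {
			for {
				err := srv.ListenAndServe()
				log.Printf("metrics server on port %d exited: %v; retrying", port, err)
				time.Sleep(time.Minute)
			}
		}()
	}
	return nil
}

func main() {
	// With port 0 the handlers are still registered, but no server runs.
	if err := exportHTTP(0); err != nil {
		log.Fatal(err)
	}
	fmt.Println("registered handlers without starting an HTTP listener")
}
```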
For example: see\nok, do you want to send a PR to increase the timeout to 3 or 4 minutes?\nSure, I can do that tonight", "positive_passages": [{"docid": "doc-en-kubernetes-8afe28bf0b2780d4439e6534f1cb824e93ac12596263051dbe57d53365747e98", "text": "# KUBE_RACE=\"-race\" KUBE_GOFLAGS=\"-tags 'integration no-docker' \" KUBE_RACE=\"\" KUBE_TIMEOUT=\"${KUBE_TIMEOUT}\" KUBE_TEST_API_VERSIONS=\"$1\" KUBE_API_VERSIONS=\"v1,experimental/v1alpha1\" \"${KUBE_ROOT}/hack/test-go.sh\" test/integration", "commid": "kubernetes_pr_14600"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ae486ffe03a734aa443d42b073f2c6b23bd4d201c949d5702210dc739a9b466f", "query": "Nodes Network when a minion node becomes unreachable [replication controller] recreates pods scheduled on the unreachable minion node AND allows scheduling of pods on a minion after it rejoins the cluster(Failed 17 times in the last 30 runs. Stability: 43 %)Failed with 0 retries\nfor triage\nI'll look into this.\nI talked to about this and he had a pretty good guess in about two seconds about what the problem is. ReplicationController sync method does this: but DaemonSet controller doesn't. (I guess when DaemonSet was copy-pasted from ReplicationController, this was accidentally removed.) Pod reflector and syncer are running in separate goroutines, and if pod reflector doesn't run then syncer will think there are no pods on the node.\nTo be clear: this is about the test Controller Manager should not create/delete replicas across restart which is the one that is actually failing.\nI'm confused you're saying that this is about , but you're updating . I thought they are about two different tests...\nYes sorry, I put my comment in the wrong issue. Reassigning this one to you, and I'll copy my comment into the other one.\nFirst failure: We check if all namespace are deleted. If other tests failed to delete their namespace this test will fail. I'm going to delete this logic as we already check if we have successfully deleted our namespace. See also for some more details.\nSecond failure: We disable network by running the following command: I checked and it seems that it results in ing master to hang. I suspect that Kubelet hangs on updating node status and depending when it breaks the connection (by timeout) it will fit in test timeouts or not.\nWhat is the default timeout for connections using client libraries to apiserver? I'm checking the code, but can't find it.\nI'm not sure if we set one, but we should. I guess I would expect to see that in the config, w/ default set by this function: , Filip Grzadkowski wrote:\nSo: let's add a timeout, and revert .\nI think that adding a timeout and reverting are orthogonal. I we want to simulate network problems with node-master communication then using REJECT is better anyway. At the same time adding timeout makes sense for many other reasons.\nOK then, let's add a timeout and test both DROP and REJECT-- no sort of network problem should hang a component.\nI think it didn't hang. It just that the kubelet was retrying too slow and sometimes it fit test timeout, sometime it didn't.\nCould we reduce the timeout to make it always fit the test timeout? , Filip Grzadkowski\nI guess we can. I'll try tomorrow.\nIt seems we have a new type of failure: It seems that kubelet on a machine where network is blocked doesn't notice new pod. After reading logs I assume that kubelet doesn't notice that the watch is broken and doesn't try to relist everything. 
It notices new pod only after watch timeout.\nUgh, I hope we don't have to start sending watch heartbeats.\nIt seems that all recent failures are with the following error: which means the reason is here: - can you please take a look or delegate?\nI did some more debugging of it. I was looking into this failure: There is something strange happening here: run performTemporaryNetworkFailure: succesfully block the load from kubelet: the succesfully wait for node to become not ready: some time nodecontroller successfully removes the pod: replication controller creates a new pod: here is something I don't undertand - because of some reason, the test doesn't observe that the pod actually disappeared. - any thoughts on it?\nBTW - this is exactly the same situation in:\nOK - I think this is related to default 30s deletion grace period.\nYes - it seems that this is exactly the case. I will send out PR fixing this out for review.\nI'm reopening it for tracking. If there won't be any failures within next few days - I will move the test back from flaky and close this issue.\nThanks for fixing, .\nIt seems that the main issue is fixed. However, there was one failure in the last 30 runs: in which one pod simply didn't started (and it was \"preparation\" for the test, not the test itself).\nDuring next 2 days there was exactly one more failure of this test: it is exactly the same as the previous one. However, those are duplicates of So I'm going to move this test out of flaky suite.", "positive_passages": [{"docid": "doc-en-kubernetes-935a80e9380fb718b6022ed1a33deb275424039d5d4580430665f5fad71fa4a9", "text": "func rcByNameContainer(name string, replicas int, image string, labels map[string]string, c api.Container) *api.ReplicationController { // Add \"name\": name to the labels, overwriting if it exists. labels[\"name\"] = name gracePeriod := int64(0) return &api.ReplicationController{ TypeMeta: unversioned.TypeMeta{ Kind: \"ReplicationController\",", "commid": "kubernetes_pr_17939"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ae486ffe03a734aa443d42b073f2c6b23bd4d201c949d5702210dc739a9b466f", "query": "Nodes Network when a minion node becomes unreachable [replication controller] recreates pods scheduled on the unreachable minion node AND allows scheduling of pods on a minion after it rejoins the cluster(Failed 17 times in the last 30 runs. Stability: 43 %)Failed with 0 retries\nfor triage\nI'll look into this.\nI talked to about this and he had a pretty good guess in about two seconds about what the problem is. ReplicationController sync method does this: but DaemonSet controller doesn't. (I guess when DaemonSet was copy-pasted from ReplicationController, this was accidentally removed.) Pod reflector and syncer are running in separate goroutines, and if pod reflector doesn't run then syncer will think there are no pods on the node.\nTo be clear: this is about the test Controller Manager should not create/delete replicas across restart which is the one that is actually failing.\nI'm confused you're saying that this is about , but you're updating . I thought they are about two different tests...\nYes sorry, I put my comment in the wrong issue. Reassigning this one to you, and I'll copy my comment into the other one.\nFirst failure: We check if all namespace are deleted. If other tests failed to delete their namespace this test will fail. I'm going to delete this logic as we already check if we have successfully deleted our namespace. 
See also for some more details.\nSecond failure: We disable network by running the following command: I checked and it seems that it results in ing master to hang. I suspect that Kubelet hangs on updating node status and depending when it breaks the connection (by timeout) it will fit in test timeouts or not.\nWhat is the default timeout for connections using client libraries to apiserver? I'm checking the code, but can't find it.\nI'm not sure if we set one, but we should. I guess I would expect to see that in the config, w/ default set by this function: , Filip Grzadkowski wrote:\nSo: let's add a timeout, and revert .\nI think that adding a timeout and reverting are orthogonal. I we want to simulate network problems with node-master communication then using REJECT is better anyway. At the same time adding timeout makes sense for many other reasons.\nOK then, let's add a timeout and test both DROP and REJECT-- no sort of network problem should hang a component.\nI think it didn't hang. It just that the kubelet was retrying too slow and sometimes it fit test timeout, sometime it didn't.\nCould we reduce the timeout to make it always fit the test timeout? , Filip Grzadkowski\nI guess we can. I'll try tomorrow.\nIt seems we have a new type of failure: It seems that kubelet on a machine where network is blocked doesn't notice new pod. After reading logs I assume that kubelet doesn't notice that the watch is broken and doesn't try to relist everything. It notices new pod only after watch timeout.\nUgh, I hope we don't have to start sending watch heartbeats.\nIt seems that all recent failures are with the following error: which means the reason is here: - can you please take a look or delegate?\nI did some more debugging of it. I was looking into this failure: There is something strange happening here: run performTemporaryNetworkFailure: succesfully block the load from kubelet: the succesfully wait for node to become not ready: some time nodecontroller successfully removes the pod: replication controller creates a new pod: here is something I don't undertand - because of some reason, the test doesn't observe that the pod actually disappeared. - any thoughts on it?\nBTW - this is exactly the same situation in:\nOK - I think this is related to default 30s deletion grace period.\nYes - it seems that this is exactly the case. I will send out PR fixing this out for review.\nI'm reopening it for tracking. If there won't be any failures within next few days - I will move the test back from flaky and close this issue.\nThanks for fixing, .\nIt seems that the main issue is fixed. However, there was one failure in the last 30 runs: in which one pod simply didn't started (and it was \"preparation\" for the test, not the test itself).\nDuring next 2 days there was exactly one more failure of this test: it is exactly the same as the previous one. 
However, those are duplicates of So I'm going to move this test out of flaky suite.", "positive_passages": [{"docid": "doc-en-kubernetes-e136b371e0079de48de6de26778afff56dd84dd7b1fb7223abd1eaab28d4c4ee", "text": "Labels: labels, }, Spec: api.PodSpec{ Containers: []api.Container{c}, Containers: []api.Container{c}, TerminationGracePeriodSeconds: &gracePeriod, }, }, },", "commid": "kubernetes_pr_17939"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ae486ffe03a734aa443d42b073f2c6b23bd4d201c949d5702210dc739a9b466f", "query": "Nodes Network when a minion node becomes unreachable [replication controller] recreates pods scheduled on the unreachable minion node AND allows scheduling of pods on a minion after it rejoins the cluster(Failed 17 times in the last 30 runs. Stability: 43 %)Failed with 0 retries\nfor triage\nI'll look into this.\nI talked to about this and he had a pretty good guess in about two seconds about what the problem is. ReplicationController sync method does this: but DaemonSet controller doesn't. (I guess when DaemonSet was copy-pasted from ReplicationController, this was accidentally removed.) Pod reflector and syncer are running in separate goroutines, and if pod reflector doesn't run then syncer will think there are no pods on the node.\nTo be clear: this is about the test Controller Manager should not create/delete replicas across restart which is the one that is actually failing.\nI'm confused you're saying that this is about , but you're updating . I thought they are about two different tests...\nYes sorry, I put my comment in the wrong issue. Reassigning this one to you, and I'll copy my comment into the other one.\nFirst failure: We check if all namespace are deleted. If other tests failed to delete their namespace this test will fail. I'm going to delete this logic as we already check if we have successfully deleted our namespace. See also for some more details.\nSecond failure: We disable network by running the following command: I checked and it seems that it results in ing master to hang. I suspect that Kubelet hangs on updating node status and depending when it breaks the connection (by timeout) it will fit in test timeouts or not.\nWhat is the default timeout for connections using client libraries to apiserver? I'm checking the code, but can't find it.\nI'm not sure if we set one, but we should. I guess I would expect to see that in the config, w/ default set by this function: , Filip Grzadkowski wrote:\nSo: let's add a timeout, and revert .\nI think that adding a timeout and reverting are orthogonal. I we want to simulate network problems with node-master communication then using REJECT is better anyway. At the same time adding timeout makes sense for many other reasons.\nOK then, let's add a timeout and test both DROP and REJECT-- no sort of network problem should hang a component.\nI think it didn't hang. It just that the kubelet was retrying too slow and sometimes it fit test timeout, sometime it didn't.\nCould we reduce the timeout to make it always fit the test timeout? , Filip Grzadkowski\nI guess we can. I'll try tomorrow.\nIt seems we have a new type of failure: It seems that kubelet on a machine where network is blocked doesn't notice new pod. After reading logs I assume that kubelet doesn't notice that the watch is broken and doesn't try to relist everything. 
It notices new pod only after watch timeout.\nUgh, I hope we don't have to start sending watch heartbeats.\nIt seems that all recent failures are with the following error: which means the reason is here: - can you please take a look or delegate?\nI did some more debugging of it. I was looking into this failure: There is something strange happening here: run performTemporaryNetworkFailure: succesfully block the load from kubelet: the succesfully wait for node to become not ready: some time nodecontroller successfully removes the pod: replication controller creates a new pod: here is something I don't undertand - because of some reason, the test doesn't observe that the pod actually disappeared. - any thoughts on it?\nBTW - this is exactly the same situation in:\nOK - I think this is related to default 30s deletion grace period.\nYes - it seems that this is exactly the case. I will send out PR fixing this out for review.\nI'm reopening it for tracking. If there won't be any failures within next few days - I will move the test back from flaky and close this issue.\nThanks for fixing, .\nIt seems that the main issue is fixed. However, there was one failure in the last 30 runs: in which one pod simply didn't started (and it was \"preparation\" for the test, not the test itself).\nDuring next 2 days there was exactly one more failure of this test: it is exactly the same as the previous one. However, those are duplicates of So I'm going to move this test out of flaky suite.", "positive_passages": [{"docid": "doc-en-kubernetes-f3dfe3bbb20982c454bb6cb44b63075a1252dd5c764644ce47d62feb98b744f8", "text": "// In case of failure or too long waiting time, an error is returned. func waitForRCPodToDisappear(c *client.Client, ns, rcName, podName string) error { label := labels.SelectorFromSet(labels.Set(map[string]string{\"name\": rcName})) return waitForPodToDisappear(c, ns, podName, label, 20*time.Second, 5*time.Minute) // NodeController evicts pod after 5 minutes, so we need timeout greater than that. // Additionally, there can be non-zero grace period, so we are setting 10 minutes // to be on the safe size. return waitForPodToDisappear(c, ns, podName, label, 20*time.Second, 10*time.Minute) } // waitForService waits until the service appears (exist == true), or disappears (exist == false)", "commid": "kubernetes_pr_17939"}], "negative_passages": []} {"query_id": "q-en-kubernetes-13f89768690da6fc511a0bb03e5ffac897995a7d8142248427cada30c91df786", "query": "runs foo and then waits for it to be ready or terminated. If the pod is terminated, it prints the log, and if it's not terminated, it attaches the output. However, it does not handle the case that the pod terminates between returning the ready status and attaching, as can be the case in the : Options to fix this: Fix kubectl: If attach fails, fall back to printing the logs If attach fails, check the status again and if it's terminated, print the logs Fix the flaky test: Prevent the container from exiting (e.g. `sh -c 'echo running in container && sleep 30') Prevent the container from entering ready state (e.g. )\nThanks for debugging. I think we should probably do both: If attach fails and the container is now terminated, fall back to printing logs Make the e2e more robust by not exiting for a while to ensure time to attach. 
I'm happy to review PRs", "positive_passages": [{"docid": "doc-en-kubernetes-534dbc6650b34e7f5dcb8c8837ce0894e9567283c054b5aecf11e6fd4cd22f59", "text": "return fmt.Errorf(\"pod %s is not running and cannot be attached to; current phase is %s\", p.PodName, pod.Status.Phase) } containerName := p.ContainerName if len(containerName) == 0 { glog.V(4).Infof(\"defaulting container name to %s\", pod.Spec.Containers[0].Name) containerName = pod.Spec.Containers[0].Name } // TODO: refactor with terminal helpers from the edit utility once that is merged var stdin io.Reader tty := p.TTY", "commid": "kubernetes_pr_14764"}], "negative_passages": []} {"query_id": "q-en-kubernetes-13f89768690da6fc511a0bb03e5ffac897995a7d8142248427cada30c91df786", "query": "runs foo and then waits for it to be ready or terminated. If the pod is terminated, it prints the log, and if it's not terminated, it attaches the output. However, it does not handle the case that the pod terminates between returning the ready status and attaching, as can be the case in the : Options to fix this: Fix kubectl: If attach fails, fall back to printing the logs If attach fails, check the status again and if it's terminated, print the logs Fix the flaky test: Prevent the container from exiting (e.g. `sh -c 'echo running in container && sleep 30') Prevent the container from entering ready state (e.g. )\nThanks for debugging. I think we should probably do both: If attach fails and the container is now terminated, fall back to printing logs Make the e2e more robust by not exiting for a while to ensure time to attach. I'm happy to review PRs", "positive_passages": [{"docid": "doc-en-kubernetes-2f7723bd834959ab791ba457ef1fdf85602b0e568ec9a6f7c2f335a53e2e7541", "text": "Name(pod.Name). Namespace(pod.Namespace). SubResource(\"attach\"). Param(\"container\", containerName) Param(\"container\", p.GetContainerName(pod)) return p.Attach.Attach(req, p.Config, stdin, p.Out, p.Err, tty) } // GetContainerName returns the name of the container to attach to, with a fallback. func (p *AttachOptions) GetContainerName(pod *api.Pod) string { if len(p.ContainerName) > 0 { return p.ContainerName } glog.V(4).Infof(\"defaulting container name to %s\", pod.Spec.Containers[0].Name) return pod.Spec.Containers[0].Name } ", "commid": "kubernetes_pr_14764"}], "negative_passages": []} {"query_id": "q-en-kubernetes-13f89768690da6fc511a0bb03e5ffac897995a7d8142248427cada30c91df786", "query": "runs foo and then waits for it to be ready or terminated. If the pod is terminated, it prints the log, and if it's not terminated, it attaches the output. However, it does not handle the case that the pod terminates between returning the ready status and attaching, as can be the case in the : Options to fix this: Fix kubectl: If attach fails, fall back to printing the logs If attach fails, check the status again and if it's terminated, print the logs Fix the flaky test: Prevent the container from exiting (e.g. `sh -c 'echo running in container && sleep 30') Prevent the container from entering ready state (e.g. )\nThanks for debugging. I think we should probably do both: If attach fails and the container is now terminated, fall back to printing logs Make the e2e more robust by not exiting for a while to ensure time to attach. 
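The first half of the fix factors the container-name defaulting into a helper so both the attach path and the log fallback agree on which container they are talking about. A simplified version with placeholder types:

```go
package main

import "fmt"

type container struct{ Name string }

type pod struct {
	Containers []container
}

type attachOptions struct {
	ContainerName string
}

// getContainerName returns the explicitly requested container, falling
// back to the pod's first container when none was specified.
func (o attachOptions) getContainerName(p pod) string {
	if o.ContainerName != "" {
		return o.ContainerName
	}
	return p.Containers[0].Name
}

func main() {
	p := pod{Containers: []container{{Name: "web"}, {Name: "sidecar"}}}
	fmt.Println(attachOptions{}.getContainerName(p))                         // web
	fmt.Println(attachOptions{ContainerName: "sidecar"}.getContainerName(p)) // sidecar
}
```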
I'm happy to review PRs", "positive_passages": [{"docid": "doc-en-kubernetes-fc49718a23c59eabe25086439c45ec61a9eb2decc56a14d09c817366d6e8b300", "text": "return err } if status == api.PodSucceeded || status == api.PodFailed { return handleLog(c, pod.Namespace, pod.Name, &api.PodLogOptions{Container: pod.Spec.Containers[0].Name}, opts.Out) return handleLog(c, pod.Namespace, pod.Name, &api.PodLogOptions{Container: opts.GetContainerName(pod)}, opts.Out) } opts.Client = c opts.PodName = pod.Name opts.Namespace = pod.Namespace return opts.Run() if err := opts.Run(); err != nil { fmt.Fprintf(opts.Out, \"Error attaching, falling back to logs: %vn\", err) return handleLog(c, pod.Namespace, pod.Name, &api.PodLogOptions{Container: opts.GetContainerName(pod)}, opts.Out) } return nil } func getRestartPolicy(cmd *cobra.Command, interactive bool) (api.RestartPolicy, error) {", "commid": "kubernetes_pr_14764"}], "negative_passages": []} {"query_id": "q-en-kubernetes-9b5fe03698914c610d32064544928591f0a004f7e1f0038b0626d93de3549b78", "query": "I'm seeing a bug where the network plugin fails to configure networking, and yet the containers for this pod are still deployed (without networking configured). It looks like the following is occurring: 1) Kubelet attempts to create infra container in 2) Network plugin fails, returns error code. 3) Kubelet detects error and skips pod creation but does not tear down the infra container 4) On the next call, the kubelet detects that the infra container is running and deploys the other containers in the pod. The fix here is for the kubelet to clean up the infra container in when it receives the error from the network plugin and before it returns. Here are some logs illustrating the steps I described above:\nCleaning up infra container on network plugin error sounds reasonable/", "positive_passages": [{"docid": "doc-en-kubernetes-b05a52db897a9e88e73403c317b6c0ebdacf4b13b37357be0ae15904fc49d948", "text": "if containerChanges.StartInfraContainer && (len(containerChanges.ContainersToStart) > 0) { glog.V(4).Infof(\"Creating pod infra container for %q\", podFullName) podInfraContainerID, err = dm.createPodInfraContainer(pod) if err != nil { glog.Errorf(\"Failed to create pod infra container: %v; Skipping pod %q\", err, podFullName) return err } // Call the networking plugin if err == nil { err = dm.networkPlugin.SetUpPod(pod.Namespace, pod.Name, podInfraContainerID) } err = dm.networkPlugin.SetUpPod(pod.Namespace, pod.Name, podInfraContainerID) if err != nil { glog.Errorf(\"Failed to create pod infra container: %v; Skipping pod %q\", err, podFullName) // Delete infra container if delErr := dm.KillContainerInPod(kubecontainer.ContainerID{ ID: string(podInfraContainerID), Type: \"docker\"}, nil, pod); delErr != nil { glog.Warningf(\"Clear infra container failed for pod %q: %v\", podFullName, delErr) } return err }", "commid": "kubernetes_pr_15156"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bc481a4627aea373f2de65a2d965f4d31ac5088d41d6a75d1a607aee3be72971", "query": "Unit tests on our CI have apparently been failing for 2 weeks and no one noticed (oops!)... The PR that seems to have caused the failures is . The failing test is - . Full Test Log: I'm not sure why this is failing on our CI and not on shippable. Has anyone else seen something similar? That test only seems to fail on the second run through with . 
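The second half handles the race where the container exits between the readiness check and the attach: if attaching fails, the command reports the error and falls back to printing logs instead of aborting. Sketched with hypothetical attach/printLogs helpers (both are illustrative stand-ins, not real kubectl functions):

```go
package main

import (
	"errors"
	"fmt"
)

// attach and printLogs are hypothetical stand-ins for the real attach
// request and the log-streaming helper.
func attach(podName, containerName string) error {
	return errors.New("container already terminated") // simulate the race
}

func printLogs(podName, containerName string) error {
	fmt.Printf("--- logs for %s/%s ---\n", podName, containerName)
	return nil
}

// attachOrLogs tries to attach first and degrades to logs on failure,
// so a short-lived container still shows its output.
func attachOrLogs(podName, containerName string) error {
	if err := attach(podName, containerName); err != nil {
		fmt.Printf("Error attaching, falling back to logs: %v\n", err)
		return printLogs(podName, containerName)
	}
	return nil
}

func main() {
	if err := attachOrLogs("foo", "web"); err != nil {
		fmt.Println(err)
	}
}
```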
Our unit tests run in a container built specifically for testing k8s-mesos: The Dockerfile for the env: Is anyone else able to repro with the above env or their own?\n/cc\nthat test was around a week ago, it's doesn't seem possible to me that it was failing since 2 weeks ago. Anyway, we have few duplications of this issue: this one And the PR where is working on: It's a little bit difficult to me to follow all the conversations in all the differente places about the same bug, could it be possible to unificate some? Since you provided the environment I will try and take a look to reproduce it locally. Thanks!\nI think I've got a working patch in\nClosing in favor of &", "positive_passages": [{"docid": "doc-en-kubernetes-d17f0d3d055e0d41a00234c8e4268df879f0dbcffebdbcd166374b528301854f", "text": "package flocker import ( \"io/ioutil\" \"os\" \"testing\" flockerClient \"github.com/ClusterHQ/flocker-go\"", "commid": "kubernetes_pr_15158"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bc481a4627aea373f2de65a2d965f4d31ac5088d41d6a75d1a607aee3be72971", "query": "Unit tests on our CI have apparently been failing for 2 weeks and no one noticed (oops!)... The PR that seems to have caused the failures is . The failing test is - . Full Test Log: I'm not sure why this is failing on our CI and not on shippable. Has anyone else seen something similar? That test only seems to fail on the second run through with . Our unit tests run in a container built specifically for testing k8s-mesos: The Dockerfile for the env: Is anyone else able to repro with the above env or their own?\n/cc\nthat test was around a week ago, it's doesn't seem possible to me that it was failing since 2 weeks ago. Anyway, we have few duplications of this issue: this one And the PR where is working on: It's a little bit difficult to me to follow all the conversations in all the differente places about the same bug, could it be possible to unificate some? Since you provided the environment I will try and take a look to reproduce it locally. Thanks!\nI think I've got a working patch in\nClosing in favor of &", "positive_passages": [{"docid": "doc-en-kubernetes-801e85fc3efb9c48ee35aa40b2866b84d45ce6d9df2ad3b730824eaeb69f888f", "text": "const pluginName = \"kubernetes.io/flocker\" func newInitializedVolumePlugMgr() volume.VolumePluginMgr { func newInitializedVolumePlugMgr(t *testing.T) (volume.VolumePluginMgr, string) { plugMgr := volume.VolumePluginMgr{} plugMgr.InitPlugins(ProbeVolumePlugins(), volume.NewFakeVolumeHost(\"/foo/bar\", nil, nil)) return plugMgr dir, err := ioutil.TempDir(\"\", \"flocker\") assert.NoError(t, err) plugMgr.InitPlugins(ProbeVolumePlugins(), volume.NewFakeVolumeHost(dir, nil, nil)) return plugMgr, dir } func TestGetByName(t *testing.T) { assert := assert.New(t) plugMgr := newInitializedVolumePlugMgr() plugMgr, _ := newInitializedVolumePlugMgr(t) plug, err := plugMgr.FindPluginByName(pluginName) assert.NotNil(plug, \"Can't find the plugin by name\")", "commid": "kubernetes_pr_15158"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bc481a4627aea373f2de65a2d965f4d31ac5088d41d6a75d1a607aee3be72971", "query": "Unit tests on our CI have apparently been failing for 2 weeks and no one noticed (oops!)... The PR that seems to have caused the failures is . The failing test is - . Full Test Log: I'm not sure why this is failing on our CI and not on shippable. Has anyone else seen something similar? That test only seems to fail on the second run through with . 
Our unit tests run in a container built specifically for testing k8s-mesos: The Dockerfile for the env: Is anyone else able to repro with the above env or their own?\n/cc\nthat test was around a week ago, it's doesn't seem possible to me that it was failing since 2 weeks ago. Anyway, we have few duplications of this issue: this one And the PR where is working on: It's a little bit difficult to me to follow all the conversations in all the differente places about the same bug, could it be possible to unificate some? Since you provided the environment I will try and take a look to reproduce it locally. Thanks!\nI think I've got a working patch in\nClosing in favor of &", "positive_passages": [{"docid": "doc-en-kubernetes-83a3639b9e979760d052c677e590a3771ee40309575c1235f6b7ffab9ddc6061", "text": "func TestCanSupport(t *testing.T) { assert := assert.New(t) plugMgr := newInitializedVolumePlugMgr() plugMgr, _ := newInitializedVolumePlugMgr(t) plug, err := plugMgr.FindPluginByName(pluginName) assert.NoError(err)", "commid": "kubernetes_pr_15158"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bc481a4627aea373f2de65a2d965f4d31ac5088d41d6a75d1a607aee3be72971", "query": "Unit tests on our CI have apparently been failing for 2 weeks and no one noticed (oops!)... The PR that seems to have caused the failures is . The failing test is - . Full Test Log: I'm not sure why this is failing on our CI and not on shippable. Has anyone else seen something similar? That test only seems to fail on the second run through with . Our unit tests run in a container built specifically for testing k8s-mesos: The Dockerfile for the env: Is anyone else able to repro with the above env or their own?\n/cc\nthat test was around a week ago, it's doesn't seem possible to me that it was failing since 2 weeks ago. Anyway, we have few duplications of this issue: this one And the PR where is working on: It's a little bit difficult to me to follow all the conversations in all the differente places about the same bug, could it be possible to unificate some? Since you provided the environment I will try and take a look to reproduce it locally. Thanks!\nI think I've got a working patch in\nClosing in favor of &", "positive_passages": [{"docid": "doc-en-kubernetes-c8787d0a0fa0bd3bd32ca090d61337ea76a0092dd149b706653fa0b322a80046", "text": "func TestNewBuilder(t *testing.T) { assert := assert.New(t) plugMgr := newInitializedVolumePlugMgr() plugMgr, _ := newInitializedVolumePlugMgr(t) plug, err := plugMgr.FindPluginByName(pluginName) assert.NoError(err)", "commid": "kubernetes_pr_15158"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bc481a4627aea373f2de65a2d965f4d31ac5088d41d6a75d1a607aee3be72971", "query": "Unit tests on our CI have apparently been failing for 2 weeks and no one noticed (oops!)... The PR that seems to have caused the failures is . The failing test is - . Full Test Log: I'm not sure why this is failing on our CI and not on shippable. Has anyone else seen something similar? That test only seems to fail on the second run through with . Our unit tests run in a container built specifically for testing k8s-mesos: The Dockerfile for the env: Is anyone else able to repro with the above env or their own?\n/cc\nthat test was around a week ago, it's doesn't seem possible to me that it was failing since 2 weeks ago. 
Anyway, we have few duplications of this issue: this one And the PR where is working on: It's a little bit difficult to me to follow all the conversations in all the differente places about the same bug, could it be possible to unificate some? Since you provided the environment I will try and take a look to reproduce it locally. Thanks!\nI think I've got a working patch in\nClosing in favor of &", "positive_passages": [{"docid": "doc-en-kubernetes-6acb5dd0a7f69ec89e843d0e92e60f58bf978f0ae06134cef69b79b98895a710", "text": "assert := assert.New(t) plugMgr := newInitializedVolumePlugMgr() plugMgr, rootDir := newInitializedVolumePlugMgr(t) if rootDir != \"\" { defer os.RemoveAll(rootDir) } plug, err := plugMgr.FindPluginByName(flockerPluginName) assert.NoError(err)", "commid": "kubernetes_pr_15158"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5d8539be161cacf223528437869590f13423a31b1176c381211bcba6b79e0cca", "query": "When trying to edit multiple resources, for example: will panic. Ref as the possible cause cc\nif was the cause, any fix needs to happen in 1.1 as well, was already cherry picked there\nAlso shows that we need a test for this case.\nis the cause. When we edit multiple resources (i.e., list-type objects), the list's Accessor.annotations (type ) is nil and will panic when UpdateApplyAnnotation tries to SetAnnotations ().", "positive_passages": [{"docid": "doc-en-kubernetes-f740146422fa9d92b4512091e13b031ffaa922c4132cc8fe40f7222b881d84d0", "text": "kube::test::get_object_assert 'rc mock2' \"{{${labels_field}.status}}\" 'replaced' fi fi # Command: kubectl edit multiple resources temp_editor=\"${KUBE_TEMP}/tmp-editor.sh\" echo -e '#!/bin/bashnsed -i \"s/status: replaced/status: edited/g\" $@' > \"${temp_editor}\" chmod +x \"${temp_editor}\" EDITOR=\"${temp_editor}\" kubectl edit \"${kube_flags[@]}\" -f \"${file}\" # Post-condition: mock service (and mock2) and mock rc (and mock2) are edited if [ \"$has_svc\" = true ]; then kube::test::get_object_assert 'services mock' \"{{${labels_field}.status}}\" 'edited' if [ \"$two_svcs\" = true ]; then kube::test::get_object_assert 'services mock2' \"{{${labels_field}.status}}\" 'edited' fi fi if [ \"$has_rc\" = true ]; then kube::test::get_object_assert 'rc mock' \"{{${labels_field}.status}}\" 'edited' if [ \"$two_rcs\" = true ]; then kube::test::get_object_assert 'rc mock2' \"{{${labels_field}.status}}\" 'edited' fi fi # cleaning rm \"${temp_editor}\" # Command # We need to set --overwrite, because otherwise, if the first attempt to run \"kubectl label\" # fails on some, but not all, of the resources, retries will fail because it tries to modify", "commid": "kubernetes_pr_15980"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5d8539be161cacf223528437869590f13423a31b1176c381211bcba6b79e0cca", "query": "When trying to edit multiple resources, for example: will panic. Ref as the possible cause cc\nif was the cause, any fix needs to happen in 1.1 as well, was already cherry picked there\nAlso shows that we need a test for this case.\nis the cause. 
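The underlying fix is to stop pointing the fake volume host at a shared, non-writable path like /foo/bar and instead give each test its own temporary directory that it removes afterwards. The general shape of that pattern in a Go test:

```go
package example

import (
	"io/ioutil"
	"os"
	"path/filepath"
	"testing"
)

// newTestDir creates an isolated scratch directory for one test; the
// caller is responsible for removing it (typically with defer).
func newTestDir(t *testing.T) string {
	dir, err := ioutil.TempDir("", "flocker")
	if err != nil {
		t.Fatalf("failed to create temp dir: %v", err)
	}
	return dir
}

func TestUsesIsolatedTempDir(t *testing.T) {
	dir := newTestDir(t)
	defer os.RemoveAll(dir) // clean up so repeated runs don't collide

	// Anything the code under test writes lands in the per-test dir.
	if err := ioutil.WriteFile(filepath.Join(dir, "state"), []byte("ok"), 0600); err != nil {
		t.Fatalf("write failed: %v", err)
	}
}
```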
When we edit multiple resources (i.e., list-type objects), the list's Accessor.annotations (type ) is nil and will panic when UpdateApplyAnnotation tries to SetAnnotations ().", "positive_passages": [{"docid": "doc-en-kubernetes-ba767ac21fd089e7575dff088c09c108566a651f3d84cd52a921a99d67c1a972", "text": "} func (a genericAccessor) SetAnnotations(annotations map[string]string) { if a.annotations == nil { emptyAnnotations := make(map[string]string) a.annotations = &emptyAnnotations } *a.annotations = annotations }", "commid": "kubernetes_pr_15980"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5d8539be161cacf223528437869590f13423a31b1176c381211bcba6b79e0cca", "query": "When trying to edit multiple resources, for example: will panic. Ref as the possible cause cc\nif was the cause, any fix needs to happen in 1.1 as well, was already cherry picked there\nAlso shows that we need a test for this case.\nis the cause. When we edit multiple resources (i.e., list-type objects), the list's Accessor.annotations (type ) is nil and will panic when UpdateApplyAnnotation tries to SetAnnotations ().", "positive_passages": [{"docid": "doc-en-kubernetes-ad36e5a41968266b5d487d1b5190950091462c03fc5c10c9979416c596cba087", "text": "defaultVersion := cmdutil.OutputVersion(cmd, clientConfig.Version) results := editResults{} for { obj, err := resource.AsVersionedObject(infos, false, defaultVersion) objs, err := resource.AsVersionedObjects(infos, defaultVersion) if err != nil { return preservedFile(err, results.file, out) } // if input object is a list, traverse and edit each item one at a time for _, obj := range objs { // TODO: add an annotating YAML printer that can print inline comments on each field, // including descriptions or validation errors // generate the file to edit buf := &bytes.Buffer{} if err := results.header.writeTo(buf); err != nil { return preservedFile(err, results.file, out) } if err := printer.PrintObj(obj, buf); err != nil { return preservedFile(err, results.file, out) } original := buf.Bytes() // TODO: add an annotating YAML printer that can print inline comments on each field, // including descriptions or validation errors // generate the file to edit buf := &bytes.Buffer{} if err := results.header.writeTo(buf); err != nil { return preservedFile(err, results.file, out) } if err := printer.PrintObj(obj, buf); err != nil { return preservedFile(err, results.file, out) } original := buf.Bytes() // launch the editor edit := editor.NewDefaultEditor() edited, file, err := edit.LaunchTempFile(\"kubectl-edit-\", ext, buf) if err != nil { return preservedFile(err, results.file, out) } // launch the editor edit := editor.NewDefaultEditor() edited, file, err := edit.LaunchTempFile(\"kubectl-edit-\", ext, buf) if err != nil { return preservedFile(err, results.file, out) } // cleanup any file from the previous pass if len(results.file) > 0 { os.Remove(results.file) } // cleanup any file from the previous pass if len(results.file) > 0 { os.Remove(results.file) } glog.V(4).Infof(\"User edited:n%s\", string(edited)) fmt.Printf(\"User edited:n%s\", string(edited)) lines, err := hasLines(bytes.NewBuffer(edited)) if err != nil { return preservedFile(err, file, out) } if bytes.Equal(original, edited) { if len(results.edit) > 0 { preservedFile(nil, file, out) } else { os.Remove(file) glog.V(4).Infof(\"User edited:n%s\", string(edited)) lines, err := hasLines(bytes.NewBuffer(edited)) if err != nil { return preservedFile(err, file, out) } fmt.Fprintln(out, \"Edit cancelled, no changes made.\") return nil 
} if !lines { if len(results.edit) > 0 { preservedFile(nil, file, out) } else { os.Remove(file) // Compare content without comments if bytes.Equal(stripComments(original), stripComments(edited)) { if len(results.edit) > 0 { preservedFile(nil, file, out) } else { os.Remove(file) } fmt.Fprintln(out, \"Edit cancelled, no changes made.\") continue } if !lines { if len(results.edit) > 0 { preservedFile(nil, file, out) } else { os.Remove(file) } fmt.Fprintln(out, \"Edit cancelled, saved file was empty.\") continue } fmt.Fprintln(out, \"Edit cancelled, saved file was empty.\") return nil } results = editResults{ file: file, } results = editResults{ file: file, } // parse the edited file updates, err := rmap.InfoForData(edited, \"edited-file\") if err != nil { return preservedFile(err, file, out) } // parse the edited file updates, err := rmap.InfoForData(edited, \"edited-file\") if err != nil { return fmt.Errorf(\"The edited file had a syntax error: %v\", err) } // annotate the edited object for kubectl apply if err := kubectl.UpdateApplyAnnotation(updates); err != nil { return preservedFile(err, file, out) } // annotate the edited object for kubectl apply if err := kubectl.UpdateApplyAnnotation(updates); err != nil { return preservedFile(err, file, out) } visitor := resource.NewFlattenListVisitor(updates, rmap) visitor := resource.NewFlattenListVisitor(updates, rmap) // need to make sure the original namespace wasn't changed while editing if err = visitor.Visit(resource.RequireNamespace(cmdNamespace)); err != nil { return preservedFile(err, file, out) } // need to make sure the original namespace wasn't changed while editing if err = visitor.Visit(resource.RequireNamespace(cmdNamespace)); err != nil { return preservedFile(err, file, out) } // use strategic merge to create a patch originalJS, err := yaml.ToJSON(original) if err != nil { return preservedFile(err, file, out) } editedJS, err := yaml.ToJSON(edited) if err != nil { return preservedFile(err, file, out) } patch, err := strategicpatch.CreateStrategicMergePatch(originalJS, editedJS, obj) // TODO: change all jsonmerge to strategicpatch // for checking preconditions preconditions := []jsonmerge.PreconditionFunc{} if err != nil { glog.V(4).Infof(\"Unable to calculate diff, no merge is possible: %v\", err) return preservedFile(err, file, out) } else { preconditions = append(preconditions, jsonmerge.RequireKeyUnchanged(\"apiVersion\")) preconditions = append(preconditions, jsonmerge.RequireKeyUnchanged(\"kind\")) preconditions = append(preconditions, jsonmerge.RequireMetadataKeyUnchanged(\"name\")) results.version = defaultVersion } // use strategic merge to create a patch originalJS, err := yaml.ToJSON(original) if err != nil { return preservedFile(err, file, out) } editedJS, err := yaml.ToJSON(edited) if err != nil { return preservedFile(err, file, out) } patch, err := strategicpatch.CreateStrategicMergePatch(originalJS, editedJS, obj) // TODO: change all jsonmerge to strategicpatch // for checking preconditions preconditions := []jsonmerge.PreconditionFunc{} if err != nil { glog.V(4).Infof(\"Unable to calculate diff, no merge is possible: %v\", err) return preservedFile(err, file, out) } else { preconditions = append(preconditions, jsonmerge.RequireKeyUnchanged(\"apiVersion\")) preconditions = append(preconditions, jsonmerge.RequireKeyUnchanged(\"kind\")) preconditions = append(preconditions, jsonmerge.RequireMetadataKeyUnchanged(\"name\")) results.version = defaultVersion } if hold, msg := jsonmerge.TestPreconditionsHold(patch, 
preconditions); !hold { fmt.Fprintf(out, \"error: %s\", msg) return preservedFile(nil, file, out) } if hold, msg := jsonmerge.TestPreconditionsHold(patch, preconditions); !hold { fmt.Fprintf(out, \"error: %s\", msg) return preservedFile(nil, file, out) } err = visitor.Visit(func(info *resource.Info, err error) error { patched, err := resource.NewHelper(info.Client, info.Mapping).Patch(info.Namespace, info.Name, api.StrategicMergePatchType, patch) if err != nil { fmt.Fprintln(out, results.addError(err, info)) err = visitor.Visit(func(info *resource.Info, err error) error { patched, err := resource.NewHelper(info.Client, info.Mapping).Patch(info.Namespace, info.Name, api.StrategicMergePatchType, patch) if err != nil { fmt.Fprintln(out, results.addError(err, info)) return nil } info.Refresh(patched, true) cmdutil.PrintSuccess(mapper, false, out, info.Mapping.Resource, info.Name, \"edited\") return nil }) if err != nil { return preservedFile(err, file, out) } info.Refresh(patched, true) cmdutil.PrintSuccess(mapper, false, out, info.Mapping.Resource, info.Name, \"edited\") return nil }) if err != nil { return preservedFile(err, file, out) } if results.retryable > 0 { fmt.Fprintf(out, \"You can run `kubectl replace -f %s` to try this update again.n\", file) return errExit } if results.conflict > 0 { fmt.Fprintf(out, \"You must update your local resource version and run `kubectl replace -f %s` to overwrite the remote changes.n\", file) return errExit if results.retryable > 0 { fmt.Fprintf(out, \"You can run `kubectl replace -f %s` to try this update again.n\", file) return errExit } if results.conflict > 0 { fmt.Fprintf(out, \"You must update your local resource version and run `kubectl replace -f %s` to overwrite the remote changes.n\", file) return errExit } if len(results.edit) == 0 { if results.notfound == 0 { os.Remove(file) } else { fmt.Fprintf(out, \"The edits you made on deleted resources have been saved to %qn\", file) } } } if len(results.edit) == 0 { if results.notfound == 0 { os.Remove(file) } else { fmt.Fprintf(out, \"The edits you made on deleted resources have been saved to %qn\", file) } return nil }", "commid": "kubernetes_pr_15980"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5d8539be161cacf223528437869590f13423a31b1176c381211bcba6b79e0cca", "query": "When trying to edit multiple resources, for example: will panic. Ref as the possible cause cc\nif was the cause, any fix needs to happen in 1.1 as well, was already cherry picked there\nAlso shows that we need a test for this case.\nis the cause. When we edit multiple resources (i.e., list-type objects), the list's Accessor.annotations (type ) is nil and will panic when UpdateApplyAnnotation tries to SetAnnotations ().", "positive_passages": [{"docid": "doc-en-kubernetes-04bdb3d2bb63267cc2d28461f18221496da6ec0095e36e4d303c9ec939a231a7", "text": "} return false, nil } // stripComments will transform a YAML file into JSON, thus dropping any comments // in it. Note that if the given file has a syntax error, the transformation will // fail and we will manually drop all comments from the file. func stripComments(file []byte) []byte { stripped, err := yaml.ToJSON(file) if err != nil { stripped = manualStrip(file) } return stripped } // manualStrip is used for dropping comments from a YAML file func manualStrip(file []byte) []byte { stripped := []byte{} for _, line := range bytes.Split(file, []byte(\"n\")) { if bytes.HasPrefix(bytes.TrimSpace(line), []byte(\"#\")) { continue } stripped = append(stripped, line...) 
stripped = append(stripped, 'n') } return stripped } ", "commid": "kubernetes_pr_15980"}], "negative_passages": []} {"query_id": "q-en-kubernetes-fc39169146c9455c303f0970f8a8e8f06dd10bfe4e4173704fe0272d20d254f8", "query": "We have had to spend time figuring out what the mount error codes mean from issues and . In , the error message is and is great for diagnosing mount problems. The console message should be returned rather than just the error code.", "positive_passages": [{"docid": "doc-en-kubernetes-a88437e65a3a8055f5357af04ee62d0026637c40bdca373d5645eb251088725e", "text": "command := exec.Command(\"mount\", mountArgs...) output, err := command.CombinedOutput() if err != nil { glog.Errorf(\"Mount failed: %vnMounting arguments: %s %s %s %vnOutput: %sn\", return fmt.Errorf(\"Mount failed: %vnMounting arguments: %s %s %s %vnOutput: %sn\", err, source, target, fstype, options, string(output)) } return err", "commid": "kubernetes_pr_16033"}], "negative_passages": []} {"query_id": "q-en-kubernetes-fc39169146c9455c303f0970f8a8e8f06dd10bfe4e4173704fe0272d20d254f8", "query": "We have had to spend time figuring out what the mount error codes mean from issues and . In , the error message is and is great for diagnosing mount problems. The console message should be returned rather than just the error code.", "positive_passages": [{"docid": "doc-en-kubernetes-3b7a9165243d003230e6f2b8d01c3faa3345eb780025309de7344e15deeba32a", "text": "command := exec.Command(\"umount\", target) output, err := command.CombinedOutput() if err != nil { glog.Errorf(\"Unmount failed: %vnUnmounting arguments: %snOutput: %sn\", err, target, string(output)) return err return fmt.Errorf(\"Unmount failed: %vnUnmounting arguments: %snOutput: %sn\", err, target, string(output)) } return nil }", "commid": "kubernetes_pr_16033"}], "negative_passages": []} {"query_id": "q-en-kubernetes-42a2dcc81f7ae4dc8763c9cd41f1b3c90a02e5ad103b4424eb3978e5cae8304c", "query": "Heapster pod: And logs: cc p0 for triage\nDid the required node scope change?\nIt's either the API on the console or monitoring write scopes on the node. , Mike Danese wrote:\nI think this affects only GCE\nI will fix this issue since is on vacation.\nNeed to cherry-pick, so I'll leave this open.\nis out for review.\nhas been closed and the cherry-pick here () has been merged, so I'm closing this issue.
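The two mount and umount diffs above apply the same fix: instead of only logging the failure, the combined stdout/stderr of the external command is folded into the returned error, so callers and bug reports see the actual reason the mount failed rather than a bare exit code. A minimal, self-contained sketch of that pattern follows; the helper name and arguments are illustrative only and are not the real pkg/util/mount API.

```
package main

import (
	"fmt"
	"os/exec"
)

// runAndWrap runs an external command and, on failure, returns an error
// that carries the command's combined output instead of just its exit
// status. The name and signature are illustrative, not Kubernetes code.
func runAndWrap(name string, args ...string) error {
	output, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		// Surfacing the console output makes the failure diagnosable from
		// the error alone (e.g. "wrong fs type, bad option, bad superblock").
		return fmt.Errorf("%s failed: %v\nArguments: %v\nOutput: %s", name, err, args, string(output))
	}
	return nil
}

func main() {
	if err := runAndWrap("mount", "-t", "nfs", "server:/export", "/mnt"); err != nil {
		fmt.Println(err)
	}
}
```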
Thanks everyone for the debugging.", "positive_passages": [{"docid": "doc-en-kubernetes-bb3448a2f8231e96978bd32caa3dfe4cef5816826281a0af9bc6816a74da177e", "text": "AUTOSCALER_MIN_NODES=\"${KUBE_AUTOSCALER_MIN_NODES:-1}\" AUTOSCALER_MAX_NODES=\"${KUBE_AUTOSCALER_MAX_NODES:-${NUM_MINIONS}}\" TARGET_NODE_UTILIZATION=\"${KUBE_TARGET_NODE_UTILIZATION:-0.7}\" ENABLE_CLUSTER_MONITORING=googleinfluxdb fi # Optional: Enable deployment experimental feature, not ready for production use.", "commid": "kubernetes_pr_16214"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f4ab746f40c555f197660ba2d46053a482240358ec0954f57c2d8c122630e391", "query": "has TODOs for how to bring up cluster addons like monitoring and logging. We should fill these in. requested by user:", "positive_passages": [{"docid": "doc-en-kubernetes-f659f8b8c8d5851aa52621379991adbeae0a55441c3bc028f358bd52a40d22aa", "text": "# Kubernetes Cluster Admin Guide: Cluster Components **Table of Contents** - [Kubernetes Cluster Admin Guide: Cluster Components](#kubernetes-cluster-admin-guide-cluster-components) - [Master Components](#master-components) - [kube-apiserver](#kube-apiserver) - [etcd](#etcd) - [kube-controller-manager](#kube-controller-manager) - [kube-scheduler](#kube-scheduler) - [addons](#addons) - [DNS](#dns) - [User interface](#user-interface) - [Container Resource Monitoring](#container-resource-monitoring) - [Cluster-level Logging](#cluster-level-logging) - [Node components](#node-components) - [kubelet](#kubelet) - [kube-proxy](#kube-proxy) - [docker](#docker) - [rkt](#rkt) - [monit](#monit) - [fluentd](#fluentd) This document outlines the various binary components that need to run to deliver a functioning Kubernetes cluster.", "commid": "kubernetes_pr_16328"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f4ab746f40c555f197660ba2d46053a482240358ec0954f57c2d8c122630e391", "query": "has TODOs for how to bring up cluster addons like monitoring and logging. We should fill these in. requested by user:", "positive_passages": [{"docid": "doc-en-kubernetes-f6f45ce8d69583315b55867b88f1db6794354dc05c3fbd3c33d9982a5cec832a", "text": "Addon objects are created in the \"kube-system\" namespace. Example addons are: * [DNS](http://releases.k8s.io/HEAD/cluster/addons/dns/) provides cluster local DNS. * [kube-ui](http://releases.k8s.io/HEAD/cluster/addons/kube-ui/) provides a graphical UI for the cluster. * [fluentd-elasticsearch](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/) provides log storage. Also see the [gcp version](http://releases.k8s.io/HEAD/cluster/addons/fluentd-gcp/). * [cluster-monitoring](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/) provides monitoring for the cluster. #### DNS While the other addons are not strictly required, all Kubernetes clusters should have [cluster DNS](dns.md), as many examples rely on it. Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches. #### User interface The kube-ui provides a read-only overview of the cluster state. 
Access [the UI using kubectl proxy](../user-guide/connecting-to-applications-proxy.md#connecting-to-the-kube-ui-service-from-your-local-workstation) #### Container Resource Monitoring [Container Resource Monitoring](../user-guide/monitoring.md) records generic time-series metrics about containers in a central database, and provides a UI for browsing that data. #### Cluster-level Logging [Container Logging](../user-guide/monitoring.md) saves container logs to a central log store with search/browsing interface. There are two implementations: * [Cluster-level logging to Google Cloud Logging]( docs/user-guide/logging.md#cluster-level-logging-to-google-cloud-logging) * [Cluster-level Logging with Elasticsearch and Kibana]( docs/user-guide/logging.md#cluster-level-logging-with-elasticsearch-and-kibana) ## Node components", "commid": "kubernetes_pr_16328"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f4ab746f40c555f197660ba2d46053a482240358ec0954f57c2d8c122630e391", "query": "has TODOs for how to bring up cluster addons like monitoring and logging. We should fill these in. requested by user:", "positive_passages": [{"docid": "doc-en-kubernetes-c00006965ef4bfc4ffcc3f9521519cd0b3dcdc0b42b97d0405c6385d72b23cc6", "text": "`monit` is a lightweight process babysitting system for keeping kubelet and docker running. ### fluentd `fluentd` is a daemon which helps provide [cluster-level logging](#cluster-level-logging). [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/cluster-components.md?pixel)]()", "commid": "kubernetes_pr_16328"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f4ab746f40c555f197660ba2d46053a482240358ec0954f57c2d8c122630e391", "query": "has TODOs for how to bring up cluster addons like monitoring and logging. We should fill these in. requested by user:", "positive_passages": [{"docid": "doc-en-kubernetes-906aecf5a78ae3aee282a885feeac406b1acfd719ea9462f0314052f03183e77", "text": "- [Scheduler pod template](#scheduler-pod-template) - [Controller Manager Template](#controller-manager-template) - [Starting and Verifying Apiserver, Scheduler, and Controller Manager](#starting-and-verifying-apiserver-scheduler-and-controller-manager) - [Logging](#logging) - [Monitoring](#monitoring) - [DNS](#dns) - [Starting Cluster Services](#starting-cluster-services) - [Troubleshooting](#troubleshooting) - [Running validate-cluster](#running-validate-cluster) - [Inspect pods and services](#inspect-pods-and-services)", "commid": "kubernetes_pr_16328"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f4ab746f40c555f197660ba2d46053a482240358ec0954f57c2d8c122630e391", "query": "has TODOs for how to bring up cluster addons like monitoring and logging. We should fill these in. requested by user:", "positive_passages": [{"docid": "doc-en-kubernetes-c792047fe9a77e92289bf8618ccaee1fbd3a23a9fdfc1a8f17176feedfae1ce8", "text": "- Otherwise, if taking the firewall-based security approach - `--api-servers=http://$MASTER_IP` - `--config=/etc/kubernetes/manifests` - `--cluster-dns=` to the address of the DNS server you will setup (see [Starting Addons](#starting-addons).) - `--cluster-dns=` to the address of the DNS server you will setup (see [Starting Cluster Services](#starting-cluster-services).) - `--cluster-domain=` to the dns domain prefix to use for cluster DNS addresses. 
- `--docker-root=` - `--root-dir=`", "commid": "kubernetes_pr_16328"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f4ab746f40c555f197660ba2d46053a482240358ec0954f57c2d8c122630e391", "query": "has TODOs for how to bring up cluster addons like monitoring and logging. We should fill these in. requested by user:", "positive_passages": [{"docid": "doc-en-kubernetes-3806feaac5ab9435de66d36611c127e2e09f0cdc692f275466ccd90bbe17df57", "text": "You should soon be able to see all your nodes by running the `kubectl get nodes` command. Otherwise, you will need to manually create node objects. ### Logging **TODO** talk about starting Logging. ### Monitoring **TODO** talk about starting Monitoring. ### DNS **TODO** talk about starting DNS. ### Starting Cluster Services You will want to complete your Kubernetes clusters by adding cluster-wide services. These are sometimes called *addons*, and [an overview of their purpose is in the admin guide]( ../../docs/admin/cluster-components.md#addons). Notes for setting up each cluster service are given below: * Cluster DNS: * required for many kubernetes examples * [Setup instructions](http://releases.k8s.io/HEAD/cluster/addons/dns/) * [Admin Guide](../admin/dns.md) * Cluster-level Logging * Multiple implementations with different storage backends and UIs. * [Elasticsearch Backend Setup Instructions](http://releases.k8s.io/HEAD/cluster/addons/fluentd-elasticsearch/) * [Google Cloud Logging Backend Setup Instructions](http://releases.k8s.io/HEAD/cluster/addons/fluentd-gcp/). * Both require running fluentd on each node. * [User Guide](../user-guide/logging.md) * Container Resource Monitoring * [Setup instructions](http://releases.k8s.io/HEAD/cluster/addons/cluster-monitoring/) * GUI * [Setup instructions](http://releases.k8s.io/HEAD/cluster/addons/kube-ui/) cluster. ## Troubleshooting", "commid": "kubernetes_pr_16328"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f4ab746f40c555f197660ba2d46053a482240358ec0954f57c2d8c122630e391", "query": "has TODOs for how to bring up cluster addons like monitoring and logging. We should fill these in. requested by user:", "positive_passages": [{"docid": "doc-en-kubernetes-d666f3b9348166c8e27cf0d6f177ecf3cb28d250971037c2ff5b8133da1ce2a0", "text": "# Connecting to applications: kubectl proxy and apiserver proxy - [Connecting to applications: kubectl proxy and apiserver proxy](#connecting-to-applications-kubectl-proxy-and-apiserver-proxy) - [Getting the apiserver proxy URL of kube-ui](#getting-the-apiserver-proxy-url-of-kube-ui) - [Connecting to the kube-ui service from your local workstation](#connecting-to-the-kube-ui-service-from-your-local-workstation) You have seen the [basics](accessing-the-cluster.md) about `kubectl proxy` and `apiserver proxy`. This guide shows how to use them together to access a service([kube-ui](ui.md)) running on the Kubernetes cluster from your workstation.", "commid": "kubernetes_pr_16328"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f4ab746f40c555f197660ba2d46053a482240358ec0954f57c2d8c122630e391", "query": "has TODOs for how to bring up cluster addons like monitoring and logging. We should fill these in. 
requested by user:", "positive_passages": [{"docid": "doc-en-kubernetes-5334e52850e973dddc416900e9822fc089a9f2dffa3d330cd086fe3333eaf6d3", "text": "# Logging **Table of Contents** - [Logging](#logging) - [Logging by Kubernetes Components](#logging-by-kubernetes-components) - [Examining the logs of running containers](#examining-the-logs-of-running-containers) - [Cluster level logging to Google Cloud Logging](#cluster-level-logging-to-google-cloud-logging) - [Cluster level logging with Elasticsearch and Kibana](#cluster-level-logging-with-elasticsearch-and-kibana) - [Ingesting Application Log Files](#ingesting-application-log-files) - [Known issues](#known-issues) ## Logging by Kubernetes Components Kubernetes components, such as kubelet and apiserver, use the [glog](https://godoc.org/github.com/golang/glog) logging library. Developer conventions for logging severity are described in [docs/devel/logging.md](../devel/logging.md).", "commid": "kubernetes_pr_16328"}], "negative_passages": []} {"query_id": "q-en-kubernetes-cc8f7dd240ce03e902a3a6988aef93cbe00593e0f5081bd8380b06a0a728c5de", "query": "Currently watches only log when they complete, which can make analyzing kube-apiserver log files a bit tricky. (did component x never start watching after that list or did apiserver shut down?)", "positive_passages": [{"docid": "doc-en-kubernetes-48a353894265358f5dd25b1305911e3a8800bf102bd9b690eae3ba9dd32ad7e6", "text": "} if (opts.Watch || forceWatch) && rw != nil { glog.Infof(\"Started to log from %v for %v\", ctx, req.Request.URL.RequestURI()) watcher, err := rw.Watch(ctx, &opts) if err != nil { scope.err(err, res.ResponseWriter, req.Request)", "commid": "kubernetes_pr_37273"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1b90ac0e27eb7112950da48e761bdf56abe3dcf0ad8451e1546669339b5f8d17", "query": "We have found that on 1k nodes cluster scheduling takes 60-200 ms and becomes the bottleneck. We want to have a standard way to reproduce scheduling latency metrics result and benchmark scheduler as simple and fast as possible. Only with that can we: 1. figure out underlying behaviors and potential bottlenecks quickly; 2. publish results in a standard way and prevent future performance degradation. We have the following goals: Goal 1: Save time on testing. More accurate scheduling latency. We mock up kubelets and only measure scheduling latency and do not care about the latency to start the pod. This is different from Kubemark -- scheduled pods won\u2019t be run by mocked kubelets. We only need to set up the master components. Goal 2: Profiling runtime metrics and find out performance bottleneck. Write scheduler integration test. Take advantage of go test and profiling tools, like cpu-profiling, memory-profiling and block-profiling. This is very fine-grained metrics. Goal 3: Reproduce test result easily. We want to have a known place to do the performance related test for scheduler. Developers should just run one script to collect all the information they need. We have the following plans for writing scheduler tests: density test (by adding a new Go test) schedule 30k pods on 1000 (fake) nodes and 3k pods on 100 (fake) nodes print out scheduling rate every second let you learn the rate changes vs number of scheduled pods benchmark (by modifying the existing benchmark in integration or simply add a new one) make use of and report nanosecond/op. schedule b.N pods when the cluster has N nodes and P scheduled pods. 
Since it takes relatively long time to finish one round, b.N is small -- level of 10. We are new to k8s codebase. We also want to know where is the best place (directory) to commit in these tests? Any suggestions are welcomed!\n/cc\nThis extends\nSo density already does this... the one option it looks like you may want is custom control , real or fake is irrelevant as the e2e's just need to point to an api-server. re: benchmarks, seems like we could simply add to the unit-tests.\nAfter chatting with lets add this as an agenda item for the SIG meeting tomorrow.\n+1 for adding benchmark (howeveer this should be integration test, not unit-test in my opinion)\ncc The next SIG on scheduler meeting is on 12.07, isn't it? If there is one tomorrow, can I have the information please?\nretract, +1 to . I was adding this to the list given the context, but it crosses over into both.\nSlides from today's meeting:\nHere are are the action item notes from the SIG meeting: Everyone agrees this is a good idea and long overdue, but the question is where would this live: New category of tests? (isolated/sandbox\u2019d/component) The current testing structure allows to easily append. create new directory (component) + hack/script to execute Future \"we could\" gate on known good data, and fail performance regressions\n/cc\n/cc", "positive_passages": [{"docid": "doc-en-kubernetes-ae6f0afab15f5950f8fcde4833d6d37d3ed9ddb0a4ec362aec9f4e02741981c8", "text": "-o -path './test/e2e/*' -o -path './test/e2e_node/*' -o -path './test/integration/*' -o -path './test/component/scheduler/perf/*' ) -prune ) -name '*_test.go' -print0 | xargs -0n1 dirname | sed 's|^./||' | sort -u )", "commid": "kubernetes_pr_18458"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1b90ac0e27eb7112950da48e761bdf56abe3dcf0ad8451e1546669339b5f8d17", "query": "We have found that on 1k nodes cluster scheduling takes 60-200 ms and becomes the bottleneck. We want to have a standard way to reproduce scheduling latency metrics result and benchmark scheduler as simple and fast as possible. Only with that can we: 1. figure out underlying behaviors and potential bottlenecks quickly; 2. publish results in a standard way and prevent future performance degradation. We have the following goals: Goal 1: Save time on testing. More accurate scheduling latency. We mock up kubelets and only measure scheduling latency and do not care about the latency to start the pod. This is different from Kubemark -- scheduled pods won\u2019t be run by mocked kubelets. We only need to set up the master components. Goal 2: Profiling runtime metrics and find out performance bottleneck. Write scheduler integration test. Take advantage of go test and profiling tools, like cpu-profiling, memory-profiling and block-profiling. This is very fine-grained metrics. Goal 3: Reproduce test result easily. We want to have a known place to do the performance related test for scheduler. Developers should just run one script to collect all the information they need. We have the following plans for writing scheduler tests: density test (by adding a new Go test) schedule 30k pods on 1000 (fake) nodes and 3k pods on 100 (fake) nodes print out scheduling rate every second let you learn the rate changes vs number of scheduled pods benchmark (by modifying the existing benchmark in integration or simply add a new one) make use of and report nanosecond/op. schedule b.N pods when the cluster has N nodes and P scheduled pods. 
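For rough intuition about why the density test reports throughput every second and why the benchmark keeps b.N small, take the 60-200 ms per-pod latency quoted above at face value: strictly sequential scheduling of 3k pods would take roughly 3-10 minutes, and 30k pods roughly 30-100 minutes, so a per-second rate is the only readable signal while such a run is in flight. These figures are back-of-the-envelope estimates derived only from the latency range stated in this thread, not measurements.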
Since it takes relatively long time to finish one round, b.N is small -- level of 10. We are new to k8s codebase. We also want to know where is the best place (directory) to commit in these tests? Any suggestions are welcomed!\n/cc\nThis extends\nSo density already does this... the one option it looks like you may want is custom control , real or fake is irrelevant as the e2e's just need to point to an api-server. re: benchmarks, seems like we could simply add to the unit-tests.\nAfter chatting with lets add this as an agenda item for the SIG meeting tomorrow.\n+1 for adding benchmark (howeveer this should be integration test, not unit-test in my opinion)\ncc The next SIG on scheduler meeting is on 12.07, isn't it? If there is one tomorrow, can I have the information please?\nretract, +1 to . I was adding this to the list given the context, but it crosses over into both.\nSlides from today's meeting:\nHere are are the action item notes from the SIG meeting: Everyone agrees this is a good idea and long overdue, but the question is where would this live: New category of tests? (isolated/sandbox\u2019d/component) The current testing structure allows to easily append. create new directory (component) + hack/script to execute Future \"we could\" gate on known good data, and fail performance regressions\n/cc\n/cc", "positive_passages": [{"docid": "doc-en-kubernetes-8de89b8c6a4e02604401db70412de1e7eca7d6c16dfb784922be1063a768154a", "text": " \"WARNING\" \"WARNING\" \"WARNING\" \"WARNING\" \"WARNING\"

PLEASE NOTE: This document applies to the HEAD of the source tree

If you are using a released version of Kubernetes, you should refer to the docs that go with that version. The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/proposals/choosing-scheduler.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io). -- Scheduler Performance Test ====== Motivation ------ We already have a performance testing system -- Kubemark. However, Kubemark requires setting up and bootstrapping a whole cluster, which takes a lot of time. We want to have a standard way to reproduce scheduling latency metrics result and benchmark scheduler as simple and fast as possible. We have the following goals: - Save time on testing - The test and benchmark can be run in a single box. We only set up components necessary to scheduling without booting up a cluster. - Profiling runtime metrics to find out bottleneck - Write scheduler integration test but focus on performance measurement. Take advantage of go profiling tools and collect fine-grained metrics, like cpu-profiling, memory-profiling and block-profiling. - Reproduce test result easily - We want to have a known place to do the performance related test for scheduler. Developers should just run one script to collect all the information they need. Currently the test suite has the following: - density test (by adding a new Go test) - schedule 30k pods on 1000 (fake) nodes and 3k pods on 100 (fake) nodes - print out scheduling rate every second - let you learn the rate changes vs number of scheduled pods - benchmark - make use of `go test -bench` and report nanosecond/op. - schedule b.N pods when the cluster has N nodes and P scheduled pods. Since it takes relatively long time to finish one round, b.N is small: 10 - 100. How To Run ------ ``` cd kubernetes/test/component/scheduler/perf ./test-performance.sh ``` [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/test/component/scheduler/perf/README.md?pixel)]()
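One detail worth calling out from the benchmark described above: b.N is used as the size of a single measured batch of pods rather than as a loop count, and the timer is reset only after the cluster fixture is in place, so the reported ns/op approximates per-pod scheduling latency. A stripped-down sketch of that shape, with a fake in-memory counter standing in for the real scheduler fixture and ScheduledPodLister, might look like this:

```
package perf

import (
	"sync/atomic"
	"testing"
	"time"
)

// scheduled simulates how many pods the scheduler has placed so far; it
// stands in for polling ScheduledPodLister in the real benchmark.
var scheduled int64

// schedulePodsAsync pretends to schedule n pods in the background, taking
// about a millisecond per pod.
func schedulePodsAsync(n int) {
	go func() {
		for i := 0; i < n; i++ {
			time.Sleep(time.Millisecond)
			atomic.AddInt64(&scheduled, 1)
		}
	}()
}

// BenchmarkBatchScheduling treats b.N as one measured batch: fixture setup
// happens before ResetTimer, then b.N pods are submitted and the benchmark
// waits until all of them are placed.
func BenchmarkBatchScheduling(b *testing.B) {
	atomic.StoreInt64(&scheduled, 0) // real fixture setup would go here
	b.ResetTimer()
	schedulePodsAsync(b.N)
	for atomic.LoadInt64(&scheduled) < int64(b.N) {
		time.Sleep(time.Millisecond)
	}
}
```

Because the whole batch is timed, `go test -bench` divides the elapsed time by b.N, so the ns/op column reads as an average per-pod cost, which lines up with the nanosecond/op figure mentioned above.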
", "commid": "kubernetes_pr_18458"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1b90ac0e27eb7112950da48e761bdf56abe3dcf0ad8451e1546669339b5f8d17", "query": "We have found that on 1k nodes cluster scheduling takes 60-200 ms and becomes the bottleneck. We want to have a standard way to reproduce scheduling latency metrics result and benchmark scheduler as simple and fast as possible. Only with that can we: 1. figure out underlying behaviors and potential bottlenecks quickly; 2. publish results in a standard way and prevent future performance degradation. We have the following goals: Goal 1: Save time on testing. More accurate scheduling latency. We mock up kubelets and only measure scheduling latency and do not care about the latency to start the pod. This is different from Kubemark -- scheduled pods won\u2019t be run by mocked kubelets. We only need to set up the master components. Goal 2: Profiling runtime metrics and find out performance bottleneck. Write scheduler integration test. Take advantage of go test and profiling tools, like cpu-profiling, memory-profiling and block-profiling. This is very fine-grained metrics. Goal 3: Reproduce test result easily. We want to have a known place to do the performance related test for scheduler. Developers should just run one script to collect all the information they need. We have the following plans for writing scheduler tests: density test (by adding a new Go test) schedule 30k pods on 1000 (fake) nodes and 3k pods on 100 (fake) nodes print out scheduling rate every second let you learn the rate changes vs number of scheduled pods benchmark (by modifying the existing benchmark in integration or simply add a new one) make use of and report nanosecond/op. schedule b.N pods when the cluster has N nodes and P scheduled pods. Since it takes relatively long time to finish one round, b.N is small -- level of 10. We are new to k8s codebase. We also want to know where is the best place (directory) to commit in these tests? Any suggestions are welcomed!\n/cc\nThis extends\nSo density already does this... the one option it looks like you may want is custom control , real or fake is irrelevant as the e2e's just need to point to an api-server. re: benchmarks, seems like we could simply add to the unit-tests.\nAfter chatting with lets add this as an agenda item for the SIG meeting tomorrow.\n+1 for adding benchmark (howeveer this should be integration test, not unit-test in my opinion)\ncc The next SIG on scheduler meeting is on 12.07, isn't it? If there is one tomorrow, can I have the information please?\nretract, +1 to . I was adding this to the list given the context, but it crosses over into both.\nSlides from today's meeting:\nHere are are the action item notes from the SIG meeting: Everyone agrees this is a good idea and long overdue, but the question is where would this live: New category of tests? (isolated/sandbox\u2019d/component) The current testing structure allows to easily append. create new directory (component) + hack/script to execute Future \"we could\" gate on known good data, and fail performance regressions\n/cc\n/cc", "positive_passages": [{"docid": "doc-en-kubernetes-63a9f94c7e22642097e3c1d84560088b8c72d74d97b74eba79b0875f49c2b35e", "text": " /* Copyright 2015 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package benchmark import ( \"testing\" \"time\" ) // BenchmarkScheduling100Nodes0Pods benchmarks the scheduling rate // when the cluster has 100 nodes and 0 scheduled pods func BenchmarkScheduling100Nodes0Pods(b *testing.B) { benchmarkScheduling(100, 0, b) } // BenchmarkScheduling100Nodes1000Pods benchmarks the scheduling rate // when the cluster has 100 nodes and 1000 scheduled pods func BenchmarkScheduling100Nodes1000Pods(b *testing.B) { benchmarkScheduling(100, 1000, b) } // BenchmarkScheduling1000Nodes0Pods benchmarks the scheduling rate // when the cluster has 1000 nodes and 0 scheduled pods func BenchmarkScheduling1000Nodes0Pods(b *testing.B) { benchmarkScheduling(1000, 0, b) } // BenchmarkScheduling1000Nodes1000Pods benchmarks the scheduling rate // when the cluster has 1000 nodes and 1000 scheduled pods func BenchmarkScheduling1000Nodes1000Pods(b *testing.B) { benchmarkScheduling(1000, 1000, b) } // benchmarkScheduling benchmarks scheduling rate with specific number of nodes // and specific number of pods already scheduled. Since an operation takes relatively // long time, b.N should be small: 10 - 100. func benchmarkScheduling(numNodes, numScheduledPods int, b *testing.B) { schedulerConfigFactory, finalFunc := mustSetupScheduler() defer finalFunc() c := schedulerConfigFactory.Client makeNodes(c, numNodes) makePods(c, numScheduledPods) for { scheduled := schedulerConfigFactory.ScheduledPodLister.Store.List() if len(scheduled) >= numScheduledPods { break } time.Sleep(1 * time.Second) } // start benchmark b.ResetTimer() makePods(c, b.N) for { // This can potentially affect performance of scheduler, since List() is done under mutex. // TODO: Setup watch on apiserver and wait until all pods scheduled. scheduled := schedulerConfigFactory.ScheduledPodLister.Store.List() if len(scheduled) >= numScheduledPods+b.N { break } // Note: This might introduce slight deviation in accuracy of benchmark results. // Since the total amount of time is relatively large, it might not be a concern. time.Sleep(100 * time.Millisecond) } } ", "commid": "kubernetes_pr_18458"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1b90ac0e27eb7112950da48e761bdf56abe3dcf0ad8451e1546669339b5f8d17", "query": "We have found that on 1k nodes cluster scheduling takes 60-200 ms and becomes the bottleneck. We want to have a standard way to reproduce scheduling latency metrics result and benchmark scheduler as simple and fast as possible. Only with that can we: 1. figure out underlying behaviors and potential bottlenecks quickly; 2. publish results in a standard way and prevent future performance degradation. We have the following goals: Goal 1: Save time on testing. More accurate scheduling latency. We mock up kubelets and only measure scheduling latency and do not care about the latency to start the pod. This is different from Kubemark -- scheduled pods won\u2019t be run by mocked kubelets. We only need to set up the master components. Goal 2: Profiling runtime metrics and find out performance bottleneck. Write scheduler integration test. 
Take advantage of go test and profiling tools, like cpu-profiling, memory-profiling and block-profiling. This is very fine-grained metrics. Goal 3: Reproduce test result easily. We want to have a known place to do the performance related test for scheduler. Developers should just run one script to collect all the information they need. We have the following plans for writing scheduler tests: density test (by adding a new Go test) schedule 30k pods on 1000 (fake) nodes and 3k pods on 100 (fake) nodes print out scheduling rate every second let you learn the rate changes vs number of scheduled pods benchmark (by modifying the existing benchmark in integration or simply add a new one) make use of and report nanosecond/op. schedule b.N pods when the cluster has N nodes and P scheduled pods. Since it takes relatively long time to finish one round, b.N is small -- level of 10. We are new to k8s codebase. We also want to know where is the best place (directory) to commit in these tests? Any suggestions are welcomed!\n/cc\nThis extends\nSo density already does this... the one option it looks like you may want is custom control , real or fake is irrelevant as the e2e's just need to point to an api-server. re: benchmarks, seems like we could simply add to the unit-tests.\nAfter chatting with lets add this as an agenda item for the SIG meeting tomorrow.\n+1 for adding benchmark (howeveer this should be integration test, not unit-test in my opinion)\ncc The next SIG on scheduler meeting is on 12.07, isn't it? If there is one tomorrow, can I have the information please?\nretract, +1 to . I was adding this to the list given the context, but it crosses over into both.\nSlides from today's meeting:\nHere are are the action item notes from the SIG meeting: Everyone agrees this is a good idea and long overdue, but the question is where would this live: New category of tests? (isolated/sandbox\u2019d/component) The current testing structure allows to easily append. create new directory (component) + hack/script to execute Future \"we could\" gate on known good data, and fail performance regressions\n/cc\n/cc", "positive_passages": [{"docid": "doc-en-kubernetes-d319af9459a1bddce3ef672a40a372b1a46ae2f612490b0de8a51ed5ef2c1406", "text": " /* Copyright 2015 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package benchmark import ( \"fmt\" \"testing\" \"time\" ) // TestSchedule100Node3KPods schedules 3k pods on 100 nodes. func TestSchedule100Node3KPods(t *testing.T) { schedulePods(100, 3000) } // TestSchedule1000Node30KPods schedules 30k pods on 1000 nodes. func TestSchedule1000Node30KPods(t *testing.T) { schedulePods(1000, 30000) } // schedulePods schedules specific number of pods on specific number of nodes. // This is used to learn the scheduling throughput on various // sizes of cluster and changes as more and more pods are scheduled. // It won't stop until all pods are scheduled. 
func schedulePods(numNodes, numPods int) { schedulerConfigFactory, destroyFunc := mustSetupScheduler() defer destroyFunc() c := schedulerConfigFactory.Client makeNodes(c, numNodes) makePods(c, numPods) prev := 0 start := time.Now() for { // This can potentially affect performance of scheduler, since List() is done under mutex. // Listing 10000 pods is an expensive operation, so running it frequently may impact scheduler. // TODO: Setup watch on apiserver and wait until all pods scheduled. scheduled := schedulerConfigFactory.ScheduledPodLister.Store.List() fmt.Printf(\"%dstrate: %dttotal: %dn\", time.Since(start)/time.Second, len(scheduled)-prev, len(scheduled)) if len(scheduled) >= numPods { return } prev = len(scheduled) time.Sleep(1 * time.Second) } } ", "commid": "kubernetes_pr_18458"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1b90ac0e27eb7112950da48e761bdf56abe3dcf0ad8451e1546669339b5f8d17", "query": "We have found that on 1k nodes cluster scheduling takes 60-200 ms and becomes the bottleneck. We want to have a standard way to reproduce scheduling latency metrics result and benchmark scheduler as simple and fast as possible. Only with that can we: 1. figure out underlying behaviors and potential bottlenecks quickly; 2. publish results in a standard way and prevent future performance degradation. We have the following goals: Goal 1: Save time on testing. More accurate scheduling latency. We mock up kubelets and only measure scheduling latency and do not care about the latency to start the pod. This is different from Kubemark -- scheduled pods won\u2019t be run by mocked kubelets. We only need to set up the master components. Goal 2: Profiling runtime metrics and find out performance bottleneck. Write scheduler integration test. Take advantage of go test and profiling tools, like cpu-profiling, memory-profiling and block-profiling. This is very fine-grained metrics. Goal 3: Reproduce test result easily. We want to have a known place to do the performance related test for scheduler. Developers should just run one script to collect all the information they need. We have the following plans for writing scheduler tests: density test (by adding a new Go test) schedule 30k pods on 1000 (fake) nodes and 3k pods on 100 (fake) nodes print out scheduling rate every second let you learn the rate changes vs number of scheduled pods benchmark (by modifying the existing benchmark in integration or simply add a new one) make use of and report nanosecond/op. schedule b.N pods when the cluster has N nodes and P scheduled pods. Since it takes relatively long time to finish one round, b.N is small -- level of 10. We are new to k8s codebase. We also want to know where is the best place (directory) to commit in these tests? Any suggestions are welcomed!\n/cc\nThis extends\nSo density already does this... the one option it looks like you may want is custom control , real or fake is irrelevant as the e2e's just need to point to an api-server. re: benchmarks, seems like we could simply add to the unit-tests.\nAfter chatting with lets add this as an agenda item for the SIG meeting tomorrow.\n+1 for adding benchmark (howeveer this should be integration test, not unit-test in my opinion)\ncc The next SIG on scheduler meeting is on 12.07, isn't it? If there is one tomorrow, can I have the information please?\nretract, +1 to . 
I was adding this to the list given the context, but it crosses over into both.\nSlides from today's meeting:\nHere are are the action item notes from the SIG meeting: Everyone agrees this is a good idea and long overdue, but the question is where would this live: New category of tests? (isolated/sandbox\u2019d/component) The current testing structure allows to easily append. create new directory (component) + hack/script to execute Future \"we could\" gate on known good data, and fail performance regressions\n/cc\n/cc", "positive_passages": [{"docid": "doc-en-kubernetes-1f73cfb3474ce8780fbdde136e86c355075a32f7e6d90dc9ffd41e79f895da29", "text": " #!/usr/bin/env bash # Copyright 2014 The Kubernetes Authors All rights reserved. # # Licensed under the Apache License, Version 2.0 (the \"License\"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an \"AS IS\" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. set -o errexit set -o nounset set -o pipefail pushd \"../../../..\" source \"./hack/lib/util.sh\" source \"./hack/lib/logging.sh\" source \"./hack/lib/etcd.sh\" popd cleanup() { kube::etcd::cleanup kube::log::status \"performance test cleanup complete\" } trap cleanup EXIT kube::etcd::start kube::log::status \"performance test start\" # TODO: set log-dir and prof output dir. DIR_BASENAME=$(basename `pwd`) go test -c -o \"${DIR_BASENAME}.test\" # We are using the benchmark suite to do profiling. Because it only runs a few pods and # theoretically it has less variance. \"./${DIR_BASENAME}.test\" -test.bench=. -test.run=xxxx -test.cpuprofile=prof.out -logtostderr=false kube::log::status \"benchmark tests finished\" # Running density tests. It might take a long time. \"./${DIR_BASENAME}.test\" -test.run=. -test.timeout=60m kube::log::status \"density tests finished\" ", "commid": "kubernetes_pr_18458"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1b90ac0e27eb7112950da48e761bdf56abe3dcf0ad8451e1546669339b5f8d17", "query": "We have found that on 1k nodes cluster scheduling takes 60-200 ms and becomes the bottleneck. We want to have a standard way to reproduce scheduling latency metrics result and benchmark scheduler as simple and fast as possible. Only with that can we: 1. figure out underlying behaviors and potential bottlenecks quickly; 2. publish results in a standard way and prevent future performance degradation. We have the following goals: Goal 1: Save time on testing. More accurate scheduling latency. We mock up kubelets and only measure scheduling latency and do not care about the latency to start the pod. This is different from Kubemark -- scheduled pods won\u2019t be run by mocked kubelets. We only need to set up the master components. Goal 2: Profiling runtime metrics and find out performance bottleneck. Write scheduler integration test. Take advantage of go test and profiling tools, like cpu-profiling, memory-profiling and block-profiling. This is very fine-grained metrics. Goal 3: Reproduce test result easily. We want to have a known place to do the performance related test for scheduler. Developers should just run one script to collect all the information they need. 
We have the following plans for writing scheduler tests: density test (by adding a new Go test) schedule 30k pods on 1000 (fake) nodes and 3k pods on 100 (fake) nodes print out scheduling rate every second let you learn the rate changes vs number of scheduled pods benchmark (by modifying the existing benchmark in integration or simply add a new one) make use of and report nanosecond/op. schedule b.N pods when the cluster has N nodes and P scheduled pods. Since it takes relatively long time to finish one round, b.N is small -- level of 10. We are new to k8s codebase. We also want to know where is the best place (directory) to commit in these tests? Any suggestions are welcomed!\n/cc\nThis extends\nSo density already does this... the one option it looks like you may want is custom control , real or fake is irrelevant as the e2e's just need to point to an api-server. re: benchmarks, seems like we could simply add to the unit-tests.\nAfter chatting with lets add this as an agenda item for the SIG meeting tomorrow.\n+1 for adding benchmark (howeveer this should be integration test, not unit-test in my opinion)\ncc The next SIG on scheduler meeting is on 12.07, isn't it? If there is one tomorrow, can I have the information please?\nretract, +1 to . I was adding this to the list given the context, but it crosses over into both.\nSlides from today's meeting:\nHere are are the action item notes from the SIG meeting: Everyone agrees this is a good idea and long overdue, but the question is where would this live: New category of tests? (isolated/sandbox\u2019d/component) The current testing structure allows to easily append. create new directory (component) + hack/script to execute Future \"we could\" gate on known good data, and fail performance regressions\n/cc\n/cc", "positive_passages": [{"docid": "doc-en-kubernetes-e876ffb1d503f96c82449b539492e05a30e494a6d3c73115a786c6b8442ab652", "text": " /* Copyright 2015 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package benchmark import ( \"net/http\" \"net/http/httptest\" \"github.com/golang/glog\" \"k8s.io/kubernetes/pkg/api\" \"k8s.io/kubernetes/pkg/api/resource\" \"k8s.io/kubernetes/pkg/api/testapi\" \"k8s.io/kubernetes/pkg/client/record\" client \"k8s.io/kubernetes/pkg/client/unversioned\" \"k8s.io/kubernetes/pkg/master\" \"k8s.io/kubernetes/plugin/pkg/scheduler\" _ \"k8s.io/kubernetes/plugin/pkg/scheduler/algorithmprovider\" \"k8s.io/kubernetes/plugin/pkg/scheduler/factory\" \"k8s.io/kubernetes/test/integration/framework\" ) // mustSetupScheduler starts the following components: // - k8s api server (a.k.a. master) // - scheduler // It returns scheduler config factory and destroyFunc which should be used to // remove resources after finished. // Notes on rate limiter: // - The BindPodsRateLimiter is nil, meaning no rate limits. // - client rate limit is set to 5000. 
func mustSetupScheduler() (schedulerConfigFactory *factory.ConfigFactory, destroyFunc func()) { framework.DeleteAllEtcdKeys() var m *master.Master masterConfig := framework.NewIntegrationTestMasterConfig() m = master.New(masterConfig) s := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) { m.Handler.ServeHTTP(w, req) })) c := client.NewOrDie(&client.Config{ Host: s.URL, GroupVersion: testapi.Default.GroupVersion(), QPS: 5000.0, Burst: 5000, }) schedulerConfigFactory = factory.NewConfigFactory(c, nil) schedulerConfig, err := schedulerConfigFactory.Create() if err != nil { panic(\"Couldn't create scheduler config\") } eventBroadcaster := record.NewBroadcaster() schedulerConfig.Recorder = eventBroadcaster.NewRecorder(api.EventSource{Component: \"scheduler\"}) eventBroadcaster.StartRecordingToSink(c.Events(\"\")) scheduler.New(schedulerConfig).Run() destroyFunc = func() { glog.Infof(\"destroying\") close(schedulerConfig.StopEverything) s.Close() glog.Infof(\"destroyed\") } return } func makeNodes(c client.Interface, nodeCount int) { glog.Infof(\"making %d nodes\", nodeCount) baseNode := &api.Node{ ObjectMeta: api.ObjectMeta{ GenerateName: \"scheduler-test-node-\", }, Spec: api.NodeSpec{ ExternalID: \"foobar\", }, Status: api.NodeStatus{ Capacity: api.ResourceList{ api.ResourcePods: *resource.NewQuantity(32, resource.DecimalSI), api.ResourceCPU: resource.MustParse(\"4\"), api.ResourceMemory: resource.MustParse(\"32Gi\"), }, Phase: api.NodeRunning, Conditions: []api.NodeCondition{ {Type: api.NodeReady, Status: api.ConditionTrue}, }, }, } for i := 0; i < nodeCount; i++ { if _, err := c.Nodes().Create(baseNode); err != nil { panic(\"error creating node: \" + err.Error()) } } } // makePods will setup specified number of scheduled pods. // Currently it goes through scheduling path and it's very slow to setup large number of pods. // TODO: Setup pods evenly on all nodes and quickly/non-linearly. func makePods(c client.Interface, podCount int) { glog.Infof(\"making %d pods\", podCount) basePod := &api.Pod{ ObjectMeta: api.ObjectMeta{ GenerateName: \"scheduler-test-pod-\", }, Spec: api.PodSpec{ Containers: []api.Container{{ Name: \"pause\", Image: \"gcr.io/google_containers/pause:1.0\", Resources: api.ResourceRequirements{ Limits: api.ResourceList{ api.ResourceCPU: resource.MustParse(\"100m\"), api.ResourceMemory: resource.MustParse(\"500Mi\"), }, Requests: api.ResourceList{ api.ResourceCPU: resource.MustParse(\"100m\"), api.ResourceMemory: resource.MustParse(\"500Mi\"), }, }, }}, }, } threads := 30 remaining := make(chan int, 1000) go func() { for i := 0; i < podCount; i++ { remaining <- i } close(remaining) }() for i := 0; i < threads; i++ { go func() { for { _, ok := <-remaining if !ok { return } for { _, err := c.Pods(\"default\").Create(basePod) if err == nil { break } } } }() } } ", "commid": "kubernetes_pr_18458"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1b90ac0e27eb7112950da48e761bdf56abe3dcf0ad8451e1546669339b5f8d17", "query": "We have found that on 1k nodes cluster scheduling takes 60-200 ms and becomes the bottleneck. We want to have a standard way to reproduce scheduling latency metrics result and benchmark scheduler as simple and fast as possible. Only with that can we: 1. figure out underlying behaviors and potential bottlenecks quickly; 2. publish results in a standard way and prevent future performance degradation. We have the following goals: Goal 1: Save time on testing. More accurate scheduling latency. 
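The makePods helper above fans pod creation out over a fixed pool of goroutines fed from a channel, retrying each create until the apiserver accepts it. A self-contained sketch of that bounded-concurrency pattern, with a stubbed create function in place of the real client call, is shown below; the constants and names are illustrative only.

```
package main

import (
	"fmt"
	"math/rand"
	"sync"
)

// create stands in for c.Pods("default").Create(basePod); it fails at
// random so the retry loop has something to do.
func create(i int) error {
	if rand.Intn(10) == 0 {
		return fmt.Errorf("transient apiserver error for pod %d", i)
	}
	return nil
}

func main() {
	const podCount = 100
	const workers = 30 // the same bound the helper above uses

	work := make(chan int, podCount)
	for i := 0; i < podCount; i++ {
		work <- i
	}
	close(work)

	var wg sync.WaitGroup
	wg.Add(workers)
	for w := 0; w < workers; w++ {
		go func() {
			defer wg.Done()
			for i := range work {
				// Retry until the create succeeds, mirroring the helper's
				// "loop until err == nil" behavior.
				for create(i) != nil {
				}
			}
		}()
	}
	wg.Wait()
	fmt.Println("all pods created")
}
```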
We mock up kubelets and only measure scheduling latency and do not care about the latency to start the pod. This is different from Kubemark -- scheduled pods won\u2019t be run by mocked kubelets. We only need to set up the master components. Goal 2: Profiling runtime metrics and find out performance bottleneck. Write scheduler integration test. Take advantage of go test and profiling tools, like cpu-profiling, memory-profiling and block-profiling. This is very fine-grained metrics. Goal 3: Reproduce test result easily. We want to have a known place to do the performance related test for scheduler. Developers should just run one script to collect all the information they need. We have the following plans for writing scheduler tests: density test (by adding a new Go test) schedule 30k pods on 1000 (fake) nodes and 3k pods on 100 (fake) nodes print out scheduling rate every second let you learn the rate changes vs number of scheduled pods benchmark (by modifying the existing benchmark in integration or simply add a new one) make use of and report nanosecond/op. schedule b.N pods when the cluster has N nodes and P scheduled pods. Since it takes relatively long time to finish one round, b.N is small -- level of 10. We are new to k8s codebase. We also want to know where is the best place (directory) to commit in these tests? Any suggestions are welcomed!\n/cc\nThis extends\nSo density already does this... the one option it looks like you may want is custom control , real or fake is irrelevant as the e2e's just need to point to an api-server. re: benchmarks, seems like we could simply add to the unit-tests.\nAfter chatting with lets add this as an agenda item for the SIG meeting tomorrow.\n+1 for adding benchmark (howeveer this should be integration test, not unit-test in my opinion)\ncc The next SIG on scheduler meeting is on 12.07, isn't it? If there is one tomorrow, can I have the information please?\nretract, +1 to . I was adding this to the list given the context, but it crosses over into both.\nSlides from today's meeting:\nHere are are the action item notes from the SIG meeting: Everyone agrees this is a good idea and long overdue, but the question is where would this live: New category of tests? (isolated/sandbox\u2019d/component) The current testing structure allows to easily append. create new directory (component) + hack/script to execute Future \"we could\" gate on known good data, and fail performance regressions\n/cc\n/cc", "positive_passages": [{"docid": "doc-en-kubernetes-362efe3c289a2c4a353dcba34808269a82ddc616124e333476a68af0cc74238c", "text": "\"fmt\" \"net/http\" \"net/http/httptest\" \"sync\" \"testing\" \"time\"", "commid": "kubernetes_pr_18458"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1b90ac0e27eb7112950da48e761bdf56abe3dcf0ad8451e1546669339b5f8d17", "query": "We have found that on 1k nodes cluster scheduling takes 60-200 ms and becomes the bottleneck. We want to have a standard way to reproduce scheduling latency metrics result and benchmark scheduler as simple and fast as possible. Only with that can we: 1. figure out underlying behaviors and potential bottlenecks quickly; 2. publish results in a standard way and prevent future performance degradation. We have the following goals: Goal 1: Save time on testing. More accurate scheduling latency. We mock up kubelets and only measure scheduling latency and do not care about the latency to start the pod. This is different from Kubemark -- scheduled pods won\u2019t be run by mocked kubelets. 
We only need to set up the master components. Goal 2: Profiling runtime metrics and find out performance bottleneck. Write scheduler integration test. Take advantage of go test and profiling tools, like cpu-profiling, memory-profiling and block-profiling. This is very fine-grained metrics. Goal 3: Reproduce test result easily. We want to have a known place to do the performance related test for scheduler. Developers should just run one script to collect all the information they need. We have the following plans for writing scheduler tests: density test (by adding a new Go test) schedule 30k pods on 1000 (fake) nodes and 3k pods on 100 (fake) nodes print out scheduling rate every second let you learn the rate changes vs number of scheduled pods benchmark (by modifying the existing benchmark in integration or simply add a new one) make use of and report nanosecond/op. schedule b.N pods when the cluster has N nodes and P scheduled pods. Since it takes relatively long time to finish one round, b.N is small -- level of 10. We are new to k8s codebase. We also want to know where is the best place (directory) to commit in these tests? Any suggestions are welcomed!\n/cc\nThis extends\nSo density already does this... the one option it looks like you may want is custom control , real or fake is irrelevant as the e2e's just need to point to an api-server. re: benchmarks, seems like we could simply add to the unit-tests.\nAfter chatting with lets add this as an agenda item for the SIG meeting tomorrow.\n+1 for adding benchmark (howeveer this should be integration test, not unit-test in my opinion)\ncc The next SIG on scheduler meeting is on 12.07, isn't it? If there is one tomorrow, can I have the information please?\nretract, +1 to . I was adding this to the list given the context, but it crosses over into both.\nSlides from today's meeting:\nHere are are the action item notes from the SIG meeting: Everyone agrees this is a good idea and long overdue, but the question is where would this live: New category of tests? (isolated/sandbox\u2019d/component) The current testing structure allows to easily append. 
create new directory (component) + hack/script to execute Future \"we could\" gate on known good data, and fail performance regressions\n/cc\n/cc", "positive_passages": [{"docid": "doc-en-kubernetes-680297b49a215c9cfb1528ef4979c0d18e911bef81f67b1fd58565fd68978733", "text": "} } } func BenchmarkScheduling(b *testing.B) { framework.DeleteAllEtcdKeys() var m *master.Master s := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) { m.Handler.ServeHTTP(w, req) })) defer s.Close() masterConfig := framework.NewIntegrationTestMasterConfig() m = master.New(masterConfig) c := client.NewOrDie(&client.Config{ Host: s.URL, GroupVersion: testapi.Default.GroupVersion(), QPS: 5000.0, Burst: 5000, }) schedulerConfigFactory := factory.NewConfigFactory(c, nil) schedulerConfig, err := schedulerConfigFactory.Create() if err != nil { b.Fatalf(\"Couldn't create scheduler config: %v\", err) } eventBroadcaster := record.NewBroadcaster() schedulerConfig.Recorder = eventBroadcaster.NewRecorder(api.EventSource{Component: \"scheduler\"}) eventBroadcaster.StartRecordingToSink(c.Events(\"\")) scheduler.New(schedulerConfig).Run() defer close(schedulerConfig.StopEverything) makeNNodes(c, 1000) N := b.N b.ResetTimer() makeNPods(c, N) for { objs := schedulerConfigFactory.ScheduledPodLister.Store.List() if len(objs) >= N { fmt.Printf(\"%v pods scheduled.n\", len(objs)) /* // To prove that this actually works: for _, o := range objs { fmt.Printf(\"%sn\", o.(*api.Pod).Spec.NodeName) } */ break } time.Sleep(time.Millisecond) } b.StopTimer() } func makeNNodes(c client.Interface, N int) { baseNode := &api.Node{ ObjectMeta: api.ObjectMeta{ GenerateName: \"scheduler-test-node-\", }, Spec: api.NodeSpec{ ExternalID: \"foobar\", }, Status: api.NodeStatus{ Capacity: api.ResourceList{ api.ResourcePods: *resource.NewQuantity(32, resource.DecimalSI), api.ResourceCPU: resource.MustParse(\"4\"), api.ResourceMemory: resource.MustParse(\"32Gi\"), }, Phase: api.NodeRunning, Conditions: []api.NodeCondition{ {Type: api.NodeReady, Status: api.ConditionTrue}, }, }, } for i := 0; i < N; i++ { if _, err := c.Nodes().Create(baseNode); err != nil { panic(\"error creating node: \" + err.Error()) } } } func makeNPods(c client.Interface, N int) { basePod := &api.Pod{ ObjectMeta: api.ObjectMeta{ GenerateName: \"scheduler-test-pod-\", }, Spec: api.PodSpec{ Containers: []api.Container{{ Name: \"pause\", Image: \"gcr.io/google_containers/pause:1.0\", Resources: api.ResourceRequirements{ Limits: api.ResourceList{ api.ResourceCPU: resource.MustParse(\"100m\"), api.ResourceMemory: resource.MustParse(\"500Mi\"), }, Requests: api.ResourceList{ api.ResourceCPU: resource.MustParse(\"100m\"), api.ResourceMemory: resource.MustParse(\"500Mi\"), }, }, }}, }, } wg := sync.WaitGroup{} threads := 30 wg.Add(threads) remaining := make(chan int, N) go func() { for i := 0; i < N; i++ { remaining <- i } close(remaining) }() for i := 0; i < threads; i++ { go func() { defer wg.Done() for { _, ok := <-remaining if !ok { return } for { _, err := c.Pods(\"default\").Create(basePod) if err == nil { break } } } }() } wg.Wait() } ", "commid": "kubernetes_pr_18458"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8a1317e390e9571143fd6d82d56b72619936bbd931877828728e077c4c54d6b4", "query": "I'm running application with several env variables. One of them has a key ending with ==. Another variable has a comma separated list of email addresses. While perfectly valid on *nix and windows, we can't pass these values into Kubernetes run command. 
It fails with \"error: invalid env:\" Example command: The parsing in should be more robust\nI volunteer myself to fix this.\nMaybe they can be passed from file? I mean s'rsly.. you need to lits 20 variables by hand?? I've specified them in yaml file already, but kubectl-run have no way to specify it.. without this, deployment is totally impossible when you need to run migrations first!", "positive_passages": [{"docid": "doc-en-kubernetes-1f04cbfb325986150059894c05be7f10e540b95dc139bf8c3ae60ecb20a8c994", "text": "func parseEnvs(envArray []string) ([]api.EnvVar, error) { envs := []api.EnvVar{} for _, env := range envArray { parts := strings.Split(env, \"=\") if len(parts) != 2 || !validation.IsCIdentifier(parts[0]) || len(parts[1]) == 0 { pos := strings.Index(env, \"=\") if pos == -1 { return nil, fmt.Errorf(\"invalid env: %v\", env) } envVar := api.EnvVar{Name: parts[0], Value: parts[1]} name := env[:pos] value := env[pos+1:] if len(name) == 0 || !validation.IsCIdentifier(name) || len(value) == 0 { return nil, fmt.Errorf(\"invalid env: %v\", env) } envVar := api.EnvVar{Name: name, Value: value} envs = append(envs, envVar) } return envs, nil", "commid": "kubernetes_pr_18997"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8a1317e390e9571143fd6d82d56b72619936bbd931877828728e077c4c54d6b4", "query": "I'm running application with several env variables. One of them has a key ending with ==. Another variable has a comma separated list of email addresses. While perfectly valid on *nix and windows, we can't pass these values into Kubernetes run command. It fails with \"error: invalid env:\" Example command: The parsing in should be more robust\nI volunteer myself to fix this.\nMaybe they can be passed from file? I mean s'rsly.. you need to lits 20 variables by hand?? I've specified them in yaml file already, but kubectl-run have no way to specify it.. without this, deployment is totally impossible when you need to run migrations first!", "positive_passages": [{"docid": "doc-en-kubernetes-f170dd73baf91cbb8b47cb5be66947022a231dc71697b76165f3c81a356329e9", "text": "} } } func TestParseEnv(t *testing.T) { tests := []struct { envArray []string expected []api.EnvVar expectErr bool test string }{ { envArray: []string{ \"THIS_ENV=isOK\", \"HAS_COMMAS=foo,bar\", \"HAS_EQUALS=jJnro54iUu75xNy==\", }, expected: []api.EnvVar{ { Name: \"THIS_ENV\", Value: \"isOK\", }, { Name: \"HAS_COMMAS\", Value: \"foo,bar\", }, { Name: \"HAS_EQUALS\", Value: \"jJnro54iUu75xNy==\", }, }, expectErr: false, test: \"test case 1\", }, { envArray: []string{ \"WITH_OUT_EQUALS\", }, expected: []api.EnvVar{}, expectErr: true, test: \"test case 2\", }, { envArray: []string{ \"WITH_OUT_VALUES=\", }, expected: []api.EnvVar{}, expectErr: true, test: \"test case 3\", }, { envArray: []string{ \"=WITH_OUT_NAME\", }, expected: []api.EnvVar{}, expectErr: true, test: \"test case 4\", }, } for _, test := range tests { envs, err := parseEnvs(test.envArray) if !test.expectErr && err != nil { t.Errorf(\"unexpected error: %v (%s)\", err, test.test) } if test.expectErr && err != nil { continue } if !reflect.DeepEqual(envs, test.expected) { t.Errorf(\"nexpected:n%#vnsaw:n%#v (%s)\", test.expected, envs, test.test) } } } ", "commid": "kubernetes_pr_18997"}], "negative_passages": []} {"query_id": "q-en-kubernetes-716ca225a252153ae82ca602b3860ed212a55f6220331030249e6fc7ae53f83d", "query": "Have not reproduced locally even after a few tens of runs.\nwhich version of Go are you using locally? 
This probably only repros on 1.5.1.\nThis was from Jenkins runs.\nThis is caused because an attempts counter is being read/written to without synchronization. I think this is caused by the event creation order not being guaranteed. I am looking a little deeper to track this down.\nGot a fix with 130k+ runs and no failures. Will send out PR shortly.", "positive_passages": [{"docid": "doc-en-kubernetes-011c6819da0d37c896c1382263d6beb095d37fa9a578eace4ab8c28bb736487d", "text": "eventCorrelator := NewEventCorrelator(util.RealClock{}) return eventBroadcaster.StartEventWatcher( func(event *api.Event) { // Make a copy before modification, because there could be multiple listeners. // Events are safe to copy like this. eventCopy := *event event = &eventCopy result, err := eventCorrelator.EventCorrelate(event) if err != nil { util.HandleError(err) } if result.Skip { return } tries := 0 for { if recordEvent(sink, result.Event, result.Patch, result.Event.Count > 1, eventCorrelator) { break } tries++ if tries >= maxTriesPerEvent { glog.Errorf(\"Unable to write event '%#v' (retry limit exceeded!)\", event) break } // Randomize the first sleep so that various clients won't all be // synced up if the master goes down. if tries == 1 { time.Sleep(time.Duration(float64(sleepDuration) * randGen.Float64())) } else { time.Sleep(sleepDuration) } } recordToSink(sink, event, eventCorrelator, randGen) }) } func recordToSink(sink EventSink, event *api.Event, eventCorrelator *EventCorrelator, randGen *rand.Rand) { // Make a copy before modification, because there could be multiple listeners. // Events are safe to copy like this. eventCopy := *event event = &eventCopy result, err := eventCorrelator.EventCorrelate(event) if err != nil { util.HandleError(err) } if result.Skip { return } tries := 0 for { if recordEvent(sink, result.Event, result.Patch, result.Event.Count > 1, eventCorrelator) { break } tries++ if tries >= maxTriesPerEvent { glog.Errorf(\"Unable to write event '%#v' (retry limit exceeded!)\", event) break } // Randomize the first sleep so that various clients won't all be // synced up if the master goes down. if tries == 1 { time.Sleep(time.Duration(float64(sleepDuration) * randGen.Float64())) } else { time.Sleep(sleepDuration) } } } func isKeyNotFoundError(err error) bool { statusErr, _ := err.(*errors.StatusError) // At the moment the server is returning 500 instead of a more specific", "commid": "kubernetes_pr_19368"}], "negative_passages": []} {"query_id": "q-en-kubernetes-716ca225a252153ae82ca602b3860ed212a55f6220331030249e6fc7ae53f83d", "query": "Have not reproduced locally even after a few tens of runs.\nwhich version of Go are you using locally? This probably only repros on 1.5.1.\nThis was from Jenkins runs.\nThis is caused because an attempts counter is being read/written to without synchronization. I think this is caused by the event creation order not being guaranteed. I am looking a little deeper to track this down.\nGot a fix with 130k+ runs and no failures. Will send out PR shortly.", "positive_passages": [{"docid": "doc-en-kubernetes-57ec4e332b5084ce82e4167e8a76c8b2120d36db60d48e45d20fd251394eaa1f", "text": "import ( \"encoding/json\" \"fmt\" \"runtime\" \"math/rand\" \"strconv\" \"testing\" \"time\"", "commid": "kubernetes_pr_19368"}], "negative_passages": []} {"query_id": "q-en-kubernetes-716ca225a252153ae82ca602b3860ed212a55f6220331030249e6fc7ae53f83d", "query": "Have not reproduced locally even after a few tens of runs.\nwhich version of Go are you using locally? 
This probably only repros on 1.5.1.\nThis was from Jenkins runs.\nThis is caused because an attempts counter is being read/written to without synchronization. I think this is caused by the event creation order not being guaranteed. I am looking a little deeper to track this down.\nGot a fix with 130k+ runs and no failures. Will send out PR shortly.", "positive_passages": [{"docid": "doc-en-kubernetes-e91b43f04052cb0e0f61297cd1bb6cdcc764f939b0a483ad7c047b0bd717bd8e", "text": "} func TestWriteEventError(t *testing.T) { ref := &api.ObjectReference{ Kind: \"Pod\", Name: \"foo\", Namespace: \"baz\", UID: \"bar\", APIVersion: \"version\", } type entry struct { timesToSendError int attemptsMade int attemptsWanted int err error }", "commid": "kubernetes_pr_19368"}], "negative_passages": []} {"query_id": "q-en-kubernetes-716ca225a252153ae82ca602b3860ed212a55f6220331030249e6fc7ae53f83d", "query": "Have not reproduced locally even after a few tens of runs.\nwhich version of Go are you using locally? This probably only repros on 1.5.1.\nThis was from Jenkins runs.\nThis is caused because an attempts counter is being read/written to without synchronization. I think this is caused by the event creation order not being guaranteed. I am looking a little deeper to track this down.\nGot a fix with 130k+ runs and no failures. Will send out PR shortly.", "positive_passages": [{"docid": "doc-en-kubernetes-30c38c98fd647e0012edaa7c556e7762ff4f622e8236e90d7b1fd82dd9b6247a", "text": "err: fmt.Errorf(\"A weird error\"), }, } done := make(chan struct{}) eventBroadcaster := NewBroadcaster() defer eventBroadcaster.StartRecordingToSink( &testEventSink{ eventCorrelator := NewEventCorrelator(util.RealClock{}) randGen := rand.New(rand.NewSource(time.Now().UnixNano())) for caseName, ent := range table { attempts := 0 sink := &testEventSink{ OnCreate: func(event *api.Event) (*api.Event, error) { if event.Message == \"finished\" { close(done) return event, nil } item, ok := table[event.Message] if !ok { t.Errorf(\"Unexpected event: %#v\", event) return event, nil } item.attemptsMade++ if item.attemptsMade < item.timesToSendError { return nil, item.err attempts++ if attempts < ent.timesToSendError { return nil, ent.err } return event, nil }, }, ).Stop() clock := &util.FakeClock{time.Now()} recorder := recorderWithFakeClock(api.EventSource{Component: \"eventTest\"}, eventBroadcaster, clock) for caseName := range table { clock.Step(1 * time.Second) recorder.Event(ref, api.EventTypeNormal, \"Reason\", caseName) runtime.Gosched() } recorder.Event(ref, api.EventTypeNormal, \"Reason\", \"finished\") <-done for caseName, item := range table { if e, a := item.attemptsWanted, item.attemptsMade; e != a { t.Errorf(\"case %v: wanted %v, got %v attempts\", caseName, e, a) } ev := &api.Event{} recordToSink(sink, ev, eventCorrelator, randGen) if attempts != ent.attemptsWanted { t.Errorf(\"case %v: wanted %d, got %d attempts\", caseName, ent.attemptsWanted, attempts) } } }", "commid": "kubernetes_pr_19368"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e505295a05058bec1cba89d2e6a4b223262484cf137dc1e912f1c7462280605a", "query": "Currently scheduler latency are observed regardless of failure. For example, binding latency is like: This could be a potential confusion if latency spikes happen because of failure. We should better handle it like separating error case from normal\n/cc\nI completely agree - are you going to change this?\nYes. 
I will do it.", "positive_passages": [{"docid": "doc-en-kubernetes-1aa67e7da57a6926f5eb84cc30c032433baff7c08ec249903c7f9422a484f221", "text": "var BindingSaturationReportInterval = 1 * time.Second var ( E2eSchedulingLatency = prometheus.NewSummary( prometheus.SummaryOpts{ E2eSchedulingLatency = prometheus.NewHistogram( prometheus.HistogramOpts{ Subsystem: schedulerSubsystem, Name: \"e2e_scheduling_latency_microseconds\", Help: \"E2e scheduling latency (scheduling algorithm + binding)\", MaxAge: time.Hour, Buckets: prometheus.ExponentialBuckets(1000, 2, 15), }, ) SchedulingAlgorithmLatency = prometheus.NewSummary( prometheus.SummaryOpts{ SchedulingAlgorithmLatency = prometheus.NewHistogram( prometheus.HistogramOpts{ Subsystem: schedulerSubsystem, Name: \"scheduling_algorithm_latency_microseconds\", Help: \"Scheduling algorithm latency\", MaxAge: time.Hour, Buckets: prometheus.ExponentialBuckets(1000, 2, 15), }, ) BindingLatency = prometheus.NewSummary( prometheus.SummaryOpts{ BindingLatency = prometheus.NewHistogram( prometheus.HistogramOpts{ Subsystem: schedulerSubsystem, Name: \"binding_latency_microseconds\", Help: \"Binding latency\", MaxAge: time.Hour, Buckets: prometheus.ExponentialBuckets(1000, 2, 15), }, ) BindingRateLimiterSaturation = prometheus.NewGauge(", "commid": "kubernetes_pr_19263"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e505295a05058bec1cba89d2e6a4b223262484cf137dc1e912f1c7462280605a", "query": "Currently scheduler latency are observed regardless of failure. For example, binding latency is like: This could be a potential confusion if latency spikes happen because of failure. We should better handle it like separating error case from normal\n/cc\nI completely agree - are you going to change this?\nYes. I will do it.", "positive_passages": [{"docid": "doc-en-kubernetes-bb1327adc32cb590cccc093a74a391d75fa1a2c72ee21374c94eb147af07655f", "text": "glog.V(3).Infof(\"Attempting to schedule: %+v\", pod) start := time.Now() defer func() { metrics.E2eSchedulingLatency.Observe(metrics.SinceInMicroseconds(start)) }() dest, err := s.config.Algorithm.Schedule(pod, s.config.NodeLister) metrics.SchedulingAlgorithmLatency.Observe(metrics.SinceInMicroseconds(start)) if err != nil { glog.V(1).Infof(\"Failed to schedule: %+v\", pod) s.config.Recorder.Eventf(pod, api.EventTypeWarning, \"FailedScheduling\", \"%v\", err) s.config.Error(pod, err) return } metrics.SchedulingAlgorithmLatency.Observe(metrics.SinceInMicroseconds(start)) b := &api.Binding{ ObjectMeta: api.ObjectMeta{Namespace: pod.Namespace, Name: pod.Name}, Target: api.ObjectReference{", "commid": "kubernetes_pr_19263"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e505295a05058bec1cba89d2e6a4b223262484cf137dc1e912f1c7462280605a", "query": "Currently scheduler latency are observed regardless of failure. For example, binding latency is like: This could be a potential confusion if latency spikes happen because of failure. We should better handle it like separating error case from normal\n/cc\nI completely agree - are you going to change this?\nYes. 
I will do it.", "positive_passages": [{"docid": "doc-en-kubernetes-8f07e6057d1d00a5e65c8d782c645c5555ee43241a25b1b7b0b9ebe5a73042e5", "text": "s.config.Modeler.LockedAction(func() { bindingStart := time.Now() err := s.config.Binder.Bind(b) metrics.BindingLatency.Observe(metrics.SinceInMicroseconds(bindingStart)) if err != nil { glog.V(1).Infof(\"Failed to bind pod: %+v\", err) s.config.Recorder.Eventf(pod, api.EventTypeNormal, \"FailedScheduling\", \"Binding rejected: %v\", err) s.config.Error(pod, err) return } metrics.BindingLatency.Observe(metrics.SinceInMicroseconds(bindingStart)) s.config.Recorder.Eventf(pod, api.EventTypeNormal, \"Scheduled\", \"Successfully assigned %v to %v\", pod.Name, dest) // tell the model to assume that this binding took effect. assumed := *pod assumed.Spec.NodeName = dest s.config.Modeler.AssumePod(&assumed) }) metrics.E2eSchedulingLatency.Observe(metrics.SinceInMicroseconds(start)) }", "commid": "kubernetes_pr_19263"}], "negative_passages": []} {"query_id": "q-en-kubernetes-4030aa11852f25ec651e1fc5dd667b4297fe2300b704273273cd2ebda6e02fe8", "query": "I'll take a look at the failed builds and triage.\nI went through the recent reboot failures in kubernetes-e2e-gke-ci-reboot and here is a summary. [6481, 6486, 6488] Kubelet never came back from the reboot, and stopped posting node status. Throughout the test suite, the failed node stayed silent: It's hard to dig further due to the lack of clarity. Gathering log would at least help us rule out some possible causes . [6485] Heapster seems to crash loop quite frequently, and sometimes it failed the test (due to being not ready).. cc [6487] kube-proxy pod is running but not ready before the reboot test even started: This is not really a reboot test failure since the cluster was already in a bad state. We should make sure all the pods are ready after each test (e.g. \"should work after restarting kube-proxy\" ran before the failed reboot test). Another thing that would help is to increase the log level of the gke cluster. The log level of a gce cluster is 4, but I think gke is at the regular 2. Due to the low log level, it's unclear when kube-proxy started failing the readiness probe.\nFiled for (3).\nThe reboot tests basically haven't worked since 9pm last night: Interestingly, this lines up with the GKE equivalent for , but in that same window, has had no trouble. cc\nDigging into logs from the runs linked.\nAll of the gke-ci failures in are heapster crash-looping after a reboot.\nChanges to the heapster config in the last day: , . Nothing sticks out to me in those PRs.\nHeapster could be in crashloop if it cannot reach influxdb. But that is a bad design, and should fix it in today's heapster with image 18. I double checked them, it is image version 18.\nFrom , could be the num_nodes is unset, we give heapster no resource to run? If that is true, it could be point to the same version screw issue of our test infra.\nHeapster v0.18.4 will crashloop if InfluxDB isn't available. However we can update HEAD to v0.19.0, which will not crash if the backend goes down. I will send a PR to update InfluxDB version. , Dawn Chen wrote:\nGKE doesn't use InfluxDB, right?\nHeapster is the only kube-system pod that's not ready in the tests. If it relies on influxdb, shouldn't influxdb be not ready too?\nI like theory. My salt suggestion might've been horrible for unset ? I thought I did that correctly, but maybe my Python/Jinja was bad.\nYes, GKE doesn't use influxDB. 
My point is that if the backend storage is not ready, heapster is in crashloop. Vishnu made some changes to make heapster can run standalone, but looks like that fix is not included in heapster 18 based on his comment.\nOkay. I'm still trying to catch up. Why is heapster on GKE waiting on backend storage? And if is unset (e.g. we haven't finished running configure-), it looks like we just give the\nCatching others up: is seeing nodes that are actually hung with CPU pegged on reboot, so the heapster conversation may be a red herring.\nThere are plenty of tests that failed because kubelet didn't come back (and post status) after the reboot. There are a few that failed only because heapster was crash looping. They may be related or not...\nI spun up a GKE test cluster: and the heapster manifest looks fine, so I think I ruled out anything regarding\nI think is the winner. The reboot tests are a red herring.\nI'm actually just seeing heapster flap on a fresh cluster, no reboots.\nHeapster crashlooping is one problem. There are other cases, however, where kubelet just stops posting status all together. If heapster crashlooping can render kubelet unresponsive, that's a bug.\nis the serial console output from the node that got pegged (in ). I rebooted the node to see if it would repro, and it came back up fine. EDIT: the link to the jenkins run.\nI guess the serial console you posted is from build , the very recent one. I saw a similar one and the node never successfully come back. At this moment, I ignore those failures. The reboot failure caused by heapster crashloop is similar to what I observed 2 days ago for influxdb in gce cluster. The comment is at Can we retitle this one for (1)? Reboot test flakiness is caused by many reasons: version screw which is handled by test-infra, heapster and influxdb crashloop is handled by , kube-proxy not ready is handled by cluster team.\nOk. So we think we have the heapster flakes, versioning stuff, and kube-proxy stuff all taken care of elsewhere? I'm fine with targeting this issue at the \"node doesn't come back after reboot\" case. Is that what you were recommending?\nYes. 1) heapster in crashloop if backend is not ready is a known issue for a while, and fix is merged to heapster repositry for a while. We just need to upgrade our heapster after more testing (PR: ) 2) influxdb one requires more information, but we can handle them through 3) version screw is handled by test-infra through . already test/flake label for it 4) kube-proxy readiness issue is handled through\nwhere else should I be looking for hints on why that VM didn't come back to life until I kicked it? Or did I destroy any evidence we might have had?\nFun fact: I thought this might be related the timing of when I made the ContainerVM change on the GKE side, but I made that change for all releases 1.1.4. That picks up this test suite, too, because it's at , and it's green: So this is a divergence from master to 1.1, or between projects.\nThat suite does not run any reboot tests...\nDammit, I thought I scanned for that. Good call. We need to fix that.\nHere's the set of suites that run the reboot tests (thanks kubernetes-e2e-gce kubernetes-e2e-gce-reboot kubernetes-e2e-gke-ci kubernetes-e2e-gke-ci-reboot\nShouldn't we decided only X-reboot test suites run those reboot tests since they are disruptive?\nHm. 
builds 6478-6480 are all green and all after the (GKE-side) ContainerVM change, so we're looking for something else.\nThe container-vm change was merged at , which lines up with kubernetes-e2e-gke-ci-reboot 6481 (the beginning of the failures).\n6481 is the one with the new containervm change. It includes kubernetes-build/7822, which has the change.\n/ That's the OSS side change, which is irrelevant. GKE flipped at 19:45:55.\nAhhh. Fair enough.\nActually, and I just determined that the GKE-side change didn't make it out until 01:45:44, it looks like, so indeed nothing to do with ContainerVM.\nI took a more careful look at the serial output pasted, and this is interesting: We're actually in the middle of a clean shutdown, trying to unmount and failing. This same stacktrace repeats every 120s.\nI spun up an e2e cluster and with minimal testing have a node in the state of It's the same kernel hang on shutdown. ... and then I accidentally deleted it when I went to bisect, but ohwell. I'm sure I'll be able to get one back if I need.\nOkay, so I've confirmed that at the build level at , I can spin up a GKE cluster and reboot tests fail. I'm going to peel back to and see if they pass (because that matches ). ETA: For reference, I'm able to tickle this with just:\n7819 had a kernel hang on the second run as well, though slightly different signature (still aufs in the end):\nFor my next test, I'm peeling back to v1.1.4. We have no test coverage on reboots on the release branch of GKE right now, and I just want to guarantee this isn't some weird GCE hypervisor issue. And v1.1.4 is clean, at least for two runs worth.\nThis failed quickly on v1.1.5. Hrm. I'm wondering if my v1.1.4 test was a fluke, peeling back to try it there again. v1.1.5 only differs by ContainerVM (and go 1.4.3 vs 1.4.2), but all of our other timelines show that not being directly related. (Alternately, it's a lot easier on the newer kernel.) (Still no idea why the tests aren't triggered, but is.)\nwas very solid. I'm now trying on GKE test. This is a GKE unique thing, because after Friday morning at 1am, anything = will get deployed using the new ContainerVM. So I'm ret-conning a way to effectively try with the new ContainerVM, just to see if it's a contributing factor. It doesn't look like it should be. One of the other commits in the stream is a heapster version bump (from / ). If there's additional flapping or something caused by that (unique to GKE), it may be causing the hang. But that timeline doesn't match the line (relevant to , etc.)\nBad news bears: failed reboot tests. I'm having trouble rectifying this information with what we know about the timeline. But it does suggest there's a problem with ContainerVM at the very least, so there might be two problems here, so I'm rolling one back on GKE and we'll see what happens.\nI've manually confirmed that by reverting deployment of on GKE, I can run reboot tests at and a latest build. Using a special build of latest that will still trigger deployment of , reboot tests fail. Well, damn.\nGKE has been repushed and should, in theory, go back green soon. There's a lot I don't understand about this bug, and it's effectively going to stop us shipping and until we can figure it out.\nOkay, I think that reverting was not all that's necessary. We unfortunately haven't managed to collect a green on yet. That gives me some comfort, because it means that the timeline-discomfort I had previously was founded.\nHeh, I lied and right right writing that we collected our first one. Woo. 
I suspect will follow soon.\nHaven't we updated go to 1.5 on Friday as well? I don't know if we use the same version of go for GKE as we do in GKE, but in general we know it causes trouble. cc for Heapster thing.\nThe timeline there is off, too. The golang1.5.3 PR went in at , which is well after the trouble started.\nMorning status: Looks like we made a little greener, and didn't touch , so the patient had two viruses.\nOkay, this is my suspicion, but I can't quite piece together what's going on: For some reason, something in (? but that busted the Heapster config, so it's confusing me as to why), caused this. I'm basing this on: the timeline of the timeline of If you suppose that we injected a bug in build 7820 that caused about a 60-70% cointoss as to whether we regress (which is similar to the rate we're seeing now that has been rolled back on GKE), it's not unreasonable that was green for , and that's the only one you have to explain away on this theory. After that, it goes kind of red, and then at about 1am, comes in and wrecks completely as well. Pulling it back much further, to, say, build 7819, and you have to explain the entire range from 6477-6480 as green, which is a much lower probability. So my money is on build 7820. (7821 is revision-equivalent, so boring to talk about.)\nOh. It is Heapster. I spun up a 7819 cluster and a 7820 cluster, and the difference is that even though still had slightly busted Salt syntax, it was functional enough to create the replication controller, whereas on 7819, there's no Heapster replication controller at all. So I think the Heapster flapping is entirely the cause of the remaining issues, so we really need to get in ASAP.\nSo yesterday afternoon I was wondering why build 7820 triggered such a large regression when, say, the bump to heapster in didn't regress us at all. Build 7820 finally let the the fix in , which was actually important because GKE at HEAD wasn't deploying (until last night), it was deploying only , so the pillar wasn't configured. I'm still not exactly sure what's going on here, and I had intentionally left blank for a while during this investigation because it didn't seem like it should matter, but it's obvious that if 7820 is the culprit that it's interesting. I pushed a fix last night to GKE that adds on 1.2 clusters as well, and we've collected our first green dot. I don't know why is still flakey, though.\nEr, correction to my last comment. My change didn't make it out last night and won't be relevant until . The green is a fluke. Sad panda.\nPR introduced syntax error, PR fixed the error although it introduced another syntax issue which was finally fixed in . if you are testing anything make sure you are using version with merged. I agree with that the problem is not with Heapster accessing InfluxDB. Also the same problem takes places in autoscaling suite (described here ). We started observing such problems around Jan 19th. AFAIK once the container crashes kubelet invoke docker inspect on the failing container. How about invoking also docker logs then? It would be very useful for debugging purposes.\nYes, I understand that fixed the issue fully, but at build 7820 the cluster that's created actually spins up a cluster with a replication controller .yaml with resources. At build 7819, it doesn't. The syntax in is busted, but it's apparently not so busted. Previous to , in build 7819 (and builds previous), there was no controller .yaml at all, so there was a window where GKE wasn't spinning up Heapster at all. 
Hence my conclusion.\nThe syntax in is correct from Salt point of view, although the generated file has empty line in the begging and is not accepted by Api Server, so the RC is not created.\nThat's inconsistent with the fact that Heapster is starting on a 7820 (GKE) cluster.\nI have a new theory, which is that Heapster hasn't been running on GKE in 1.2 since , until .\nIt's not clear to me that upgrading heapster will solve this issue, since heapster doesn't rely on any backend on gke. Instead of asking kubelet to do that, it makes more sense for the test itself to get the logs through the apiserver. I think we should add that to the e2e test.\nI spun up a cluster from just prior to and this is the set of RCs I see: So basically, when went in, Heapster was enabled for the first time in GKE (master, not the release branches) in months and we started flipping out. This is a serious test coverage issue, obviously.\nI edited the above comment, but we would obviously catch this in the release branches because we have monitoring tied to Heapster related things. We need to be better here for CI.\nI made some investigation. The problem with crashlooping Heapster is due to: The issue was fixed in I'll create a patch release and bump the version.\nIt looks like is green. I'm hoping that , which is the first build past , will go green as well. (The cycle time on that build is waaaaay too long.)\nIt looks like we're good! is green now!\nfor node problem detector.", "positive_passages": [{"docid": "doc-en-kubernetes-ebc66d77e24ab237c98b3959ba34cfd7e3f3de49272b21894bcb582cbeb1e76d", "text": "apiVersion: v1 kind: ReplicationController metadata: name: heapster-v11 name: heapster-v12 namespace: kube-system labels: k8s-app: heapster version: v11 version: v12 kubernetes.io/cluster-service: \"true\" spec: replicas: 1 selector: k8s-app: heapster version: v11 version: v12 template: metadata: labels: k8s-app: heapster version: v11 version: v12 kubernetes.io/cluster-service: \"true\" spec: containers: - image: gcr.io/google_containers/heapster:v0.18.4 - image: gcr.io/google_containers/heapster:v0.18.5 name: heapster resources: # keep request = limit to keep this container in guaranteed class", "commid": "kubernetes_pr_20109"}], "negative_passages": []} {"query_id": "q-en-kubernetes-f10bead500960ea3dde1ad0ae7083423c7980e31d3345e62b31b741d2afc29a2", "query": "I noticed this flake by looking at the submit queue. The test is simple, but when it fails, we have no good way to debug it. Below is the test in question. 
When the test fails, we should print the lines we got in each case.", "positive_passages": [{"docid": "doc-en-kubernetes-ad312c9b8752a10351c04c7fff0d1a3dbb6c3a97d2acaedd78373c30ebe599b2", "text": "By(\"restricting to a time range\") time.Sleep(1500 * time.Millisecond) // ensure that startup logs on the node are seen as older than 1s out = runKubectlOrDie(\"log\", pod.Name, containerName, nsFlag, \"--since=1s\") recent := len(strings.Split(out, \"n\")) out = runKubectlOrDie(\"log\", pod.Name, containerName, nsFlag, \"--since=24h\") older := len(strings.Split(out, \"n\")) Expect(recent).To(BeNumerically(\"<\", older)) recent_out := runKubectlOrDie(\"log\", pod.Name, containerName, nsFlag, \"--since=1s\") recent := len(strings.Split(recent_out, \"n\")) older_out := runKubectlOrDie(\"log\", pod.Name, containerName, nsFlag, \"--since=24h\") older := len(strings.Split(older_out, \"n\")) Expect(recent).To(BeNumerically(\"<\", older), \"expected recent(%v) to be less than older(%v)nrecent lines:n%vnolder lines:n%vn\", recent, older, recent_out, older_out) }) }) })", "commid": "kubernetes_pr_20289"}], "negative_passages": []} {"query_id": "q-en-kubernetes-1b2bcc3aa22e5d5f53df9233e5f80a56f1f4bdaeb462b3a5016092de1417360c", "query": "hack/build-cross sets it without looking at the current value. And the value has to be an array, which is a bit awkward to use.\nThis could be improved indeed. Maybe something like and Haven't tested it though... But why would one want to change the value?\nI just wanted to build one, so I did the \"obvious\" thing: hack/dev-build-and-` It failed for multiple reasons :) , Lucas K\u00e4ldstr\u00f6m wrote:\nThis issue has been a major downer for me. It's much harder right now than it ought to be to articulate the few platforms for which you'd like to build each of server binaries, client binaries, test binaries, etc. Every time I've tried to do this \"the right way,\" I've spent hours looking at scripts and feeling frustrated that build options aren't documented better. Every time, I end up cutting my losses and resorting to temporary build script hacks instead, just to get the job done.\nDoesn't fix this issue exactly, but I've found that you can easily build only if you set .\nTaking a look at this today, I also thought that setting seems to be the most reasonable.\n/sig release", "positive_passages": [{"docid": "doc-en-kubernetes-d35fa71ee8824da4b40155b19d1ccd789d268fb5501f57faebeb2059c716bfd1", "text": "readonly KUBE_NODE_BINARIES=(\"${KUBE_NODE_TARGETS[@]##*/}\") readonly KUBE_NODE_BINARIES_WIN=(\"${KUBE_NODE_BINARIES[@]/%/.exe}\") if [[ \"${KUBE_FASTBUILD:-}\" == \"true\" ]]; then if [[ -n \"${KUBE_BUILD_PLATFORMS:-}\" ]]; then readonly KUBE_SERVER_PLATFORMS=(${KUBE_BUILD_PLATFORMS}) readonly KUBE_NODE_PLATFORMS=(${KUBE_BUILD_PLATFORMS}) readonly KUBE_TEST_PLATFORMS=(${KUBE_BUILD_PLATFORMS}) readonly KUBE_CLIENT_PLATFORMS=(${KUBE_BUILD_PLATFORMS}) elif [[ \"${KUBE_FASTBUILD:-}\" == \"true\" ]]; then readonly KUBE_SERVER_PLATFORMS=(linux/amd64) readonly KUBE_NODE_PLATFORMS=(linux/amd64) if [[ \"${KUBE_BUILDER_OS:-}\" == \"darwin\"* ]]; then", "commid": "kubernetes_pr_46862"}], "negative_passages": []} {"query_id": "q-en-kubernetes-124a8f04c015bf597a839b5fc96d254a778164bb355fa6ee63ad65e704649190", "query": "The delete command for a replicaset hangs forever. This then manifests as phantom RSs when I delete and recreate a namespace.\n- delete is the gift that keeps on giving\ncc\nHad you already deleted the corresponding Deployment?\nI am still seeing this problem. 
Deployments and replica sets are sticking.\nFrom what I can tell, calling DELETE_COLLECTION on Deployments is returning: And since the controller assumes not found errors are equivalent to already deleted, it thinks the content is deleted. When I manually inspect, I still see the deployment and replica set laying around... //cc\nholding on this while i debug some more...\nThis was a false alarm, apologies.", "positive_passages": [{"docid": "doc-en-kubernetes-d7360ea6d6c919fd207a3604048922d88ad5448942a504e7efea9b7df4d6267d", "text": "pendingActionSet.Insert( strings.Join([]string{\"delete-collection\", \"daemonsets\", \"\"}, \"-\"), strings.Join([]string{\"delete-collection\", \"deployments\", \"\"}, \"-\"), strings.Join([]string{\"delete-collection\", \"replicasets\", \"\"}, \"-\"), strings.Join([]string{\"delete-collection\", \"jobs\", \"\"}, \"-\"), strings.Join([]string{\"delete-collection\", \"horizontalpodautoscalers\", \"\"}, \"-\"), strings.Join([]string{\"delete-collection\", \"ingresses\", \"\"}, \"-\"),", "commid": "kubernetes_pr_21055"}], "negative_passages": []} {"query_id": "q-en-kubernetes-124a8f04c015bf597a839b5fc96d254a778164bb355fa6ee63ad65e704649190", "query": "The delete command for a replicaset hangs forever. This then manifests as phantom RSs when I delete and recreate a namespace.\n- delete is the gift that keeps on giving\ncc\nHad you already deleted the corresponding Deployment?\nI am still seeing this problem. Deployments and replica sets are sticking.\nFrom what I can tell, calling DELETE_COLLECTION on Deployments is returning: And since the controller assumes not found errors are equivalent to already deleted, it thinks the content is deleted. When I manually inspect, I still see the deployment and replica set laying around... //cc\nholding on this while i debug some more...\nThis was a false alarm, apologies.", "positive_passages": [{"docid": "doc-en-kubernetes-7b0dd7f5dbd61c679b18815a61960cfd66d03ff942834df5cc612747b6780028", "text": "mockClient := fake.NewSimpleClientset(testInput.testNamespace) if containsVersion(versions, \"extensions/v1beta1\") { resources := []unversioned.APIResource{} for _, resource := range []string{\"daemonsets\", \"deployments\", \"jobs\", \"horizontalpodautoscalers\", \"ingresses\"} { for _, resource := range []string{\"daemonsets\", \"deployments\", \"replicasets\", \"jobs\", \"horizontalpodautoscalers\", \"ingresses\"} { resources = append(resources, unversioned.APIResource{Name: resource}) } mockClient.Resources = map[string]*unversioned.APIResourceList{", "commid": "kubernetes_pr_21055"}], "negative_passages": []} {"query_id": "q-en-kubernetes-124a8f04c015bf597a839b5fc96d254a778164bb355fa6ee63ad65e704649190", "query": "The delete command for a replicaset hangs forever. This then manifests as phantom RSs when I delete and recreate a namespace.\n- delete is the gift that keeps on giving\ncc\nHad you already deleted the corresponding Deployment?\nI am still seeing this problem. Deployments and replica sets are sticking.\nFrom what I can tell, calling DELETE_COLLECTION on Deployments is returning: And since the controller assumes not found errors are equivalent to already deleted, it thinks the content is deleted. When I manually inspect, I still see the deployment and replica set laying around... 
//cc\nholding on this while i debug some more...\nThis was a false alarm, apologies.", "positive_passages": [{"docid": "doc-en-kubernetes-20886b1a54b6a6fea2936c5a404a98df3b86a18a193dca840028f706711462b5", "text": "return estimate, err } } if containsResource(resources, \"replicasets\") { err = deleteReplicaSets(kubeClient.Extensions(), namespace) if err != nil { return estimate, err } } } return estimate, nil }", "commid": "kubernetes_pr_21055"}], "negative_passages": []} {"query_id": "q-en-kubernetes-124a8f04c015bf597a839b5fc96d254a778164bb355fa6ee63ad65e704649190", "query": "The delete command for a replicaset hangs forever. This then manifests as phantom RSs when I delete and recreate a namespace.\n- delete is the gift that keeps on giving\ncc\nHad you already deleted the corresponding Deployment?\nI am still seeing this problem. Deployments and replica sets are sticking.\nFrom what I can tell, calling DELETE_COLLECTION on Deployments is returning: And since the controller assumes not found errors are equivalent to already deleted, it thinks the content is deleted. When I manually inspect, I still see the deployment and replica set laying around... //cc\nholding on this while i debug some more...\nThis was a false alarm, apologies.", "positive_passages": [{"docid": "doc-en-kubernetes-d558f7e679a7a08fbb9fc9335effb80ce7a7605162c83bf10c9e5e7c2ae3f585", "text": "return expClient.Deployments(ns).DeleteCollection(nil, api.ListOptions{}) } func deleteReplicaSets(expClient extensions_unversioned.ExtensionsInterface, ns string) error { return expClient.ReplicaSets(ns).DeleteCollection(nil, api.ListOptions{}) } func deleteIngress(expClient extensions_unversioned.ExtensionsInterface, ns string) error { return expClient.Ingresses(ns).DeleteCollection(nil, api.ListOptions{}) }", "commid": "kubernetes_pr_21055"}], "negative_passages": []} {"query_id": "q-en-kubernetes-56abf1c19aeb772bfdf5543a20f43dcbe7bbdea78cd050790b056a5e979d3a30", "query": "http://pr- is another failure.\nAlso seen in http://pr-\nAgain. Not assigned to anyone, so I am sending to for routing\nAgain: Is there anyone working on this? I will spend some time looking into this.\nI think that the name of this issue is misleading - the issue seems to be unrelated to etcd_watcher Basically, if you look into this part of log: So it seems that, the WaitGroup that is problematic is in httptest.Server. So my feeling is that we are simply talking to a server that is already being close and this is ~roughly duplicate of We should just find where exactly the problem is and try to work-around it.\nI think this is exactly what we had before. I will send a PR with the fix today.", "positive_passages": [{"docid": "doc-en-kubernetes-fa29dbaf1f68f1094b74edc90b9e4c7d724f5a9da1b782e0631544c5a07fdd02", "text": "func TestClient(t *testing.T) { _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() ns := api.NamespaceDefault framework.DeleteAllEtcdKeys()", "commid": "kubernetes_pr_21259"}], "negative_passages": []} {"query_id": "q-en-kubernetes-56abf1c19aeb772bfdf5543a20f43dcbe7bbdea78cd050790b056a5e979d3a30", "query": "http://pr- is another failure.\nAlso seen in http://pr-\nAgain. Not assigned to anyone, so I am sending to for routing\nAgain: Is there anyone working on this? 
I will spend some time looking into this.\nI think that the name of this issue is misleading - the issue seems to be unrelated to etcd_watcher Basically, if you look into this part of log: So it seems that, the WaitGroup that is problematic is in httptest.Server. So my feeling is that we are simply talking to a server that is already being close and this is ~roughly duplicate of We should just find where exactly the problem is and try to work-around it.\nI think this is exactly what we had before. I will send a PR with the fix today.", "positive_passages": [{"docid": "doc-en-kubernetes-f4d028b5b1309560e99b7143f8a068140491d08730d8e43ee670a7a791366fce", "text": "func TestSingleWatch(t *testing.T) { _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() ns := \"blargh\" deleteAllEtcdKeys()", "commid": "kubernetes_pr_21259"}], "negative_passages": []} {"query_id": "q-en-kubernetes-56abf1c19aeb772bfdf5543a20f43dcbe7bbdea78cd050790b056a5e979d3a30", "query": "http://pr- is another failure.\nAlso seen in http://pr-\nAgain. Not assigned to anyone, so I am sending to for routing\nAgain: Is there anyone working on this? I will spend some time looking into this.\nI think that the name of this issue is misleading - the issue seems to be unrelated to etcd_watcher Basically, if you look into this part of log: So it seems that, the WaitGroup that is problematic is in httptest.Server. So my feeling is that we are simply talking to a server that is already being close and this is ~roughly duplicate of We should just find where exactly the problem is and try to work-around it.\nI think this is exactly what we had before. I will send a PR with the fix today.", "positive_passages": [{"docid": "doc-en-kubernetes-44d505dfad11f25e8977d0627b021585e0d05a9abfe242ed22c7db76f758817c", "text": "framework.DeleteAllEtcdKeys() defer framework.DeleteAllEtcdKeys() _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() ns := api.NamespaceDefault client := client.NewOrDie(&client.Config{Host: s.URL, ContentConfig: client.ContentConfig{GroupVersion: testapi.Default.GroupVersion()}})", "commid": "kubernetes_pr_21259"}], "negative_passages": []} {"query_id": "q-en-kubernetes-56abf1c19aeb772bfdf5543a20f43dcbe7bbdea78cd050790b056a5e979d3a30", "query": "http://pr- is another failure.\nAlso seen in http://pr-\nAgain. Not assigned to anyone, so I am sending to for routing\nAgain: Is there anyone working on this? I will spend some time looking into this.\nI think that the name of this issue is misleading - the issue seems to be unrelated to etcd_watcher Basically, if you look into this part of log: So it seems that, the WaitGroup that is problematic is in httptest.Server. So my feeling is that we are simply talking to a server that is already being close and this is ~roughly duplicate of We should just find where exactly the problem is and try to work-around it.\nI think this is exactly what we had before. 
I will send a PR with the fix today.", "positive_passages": [{"docid": "doc-en-kubernetes-e4c3d6fa881908190861d750462cda2a30321e914c2da87cb87e21c85f2b0d43", "text": "func TestExperimentalPrefix(t *testing.T) { _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() resp, err := http.Get(s.URL + \"/apis/extensions/\") if err != nil {", "commid": "kubernetes_pr_21259"}], "negative_passages": []} {"query_id": "q-en-kubernetes-56abf1c19aeb772bfdf5543a20f43dcbe7bbdea78cd050790b056a5e979d3a30", "query": "http://pr- is another failure.\nAlso seen in http://pr-\nAgain. Not assigned to anyone, so I am sending to for routing\nAgain: Is there anyone working on this? I will spend some time looking into this.\nI think that the name of this issue is misleading - the issue seems to be unrelated to etcd_watcher Basically, if you look into this part of log: So it seems that, the WaitGroup that is problematic is in httptest.Server. So my feeling is that we are simply talking to a server that is already being close and this is ~roughly duplicate of We should just find where exactly the problem is and try to work-around it.\nI think this is exactly what we had before. I will send a PR with the fix today.", "positive_passages": [{"docid": "doc-en-kubernetes-c5cfacfc94666286392440ca5c99f749f87451cebfe9173a090c5d7ac3ad1fb6", "text": "func TestWatchSucceedsWithoutArgs(t *testing.T) { _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() resp, err := http.Get(s.URL + \"/api/v1/namespaces?watch=1\") if err != nil {", "commid": "kubernetes_pr_21259"}], "negative_passages": []} {"query_id": "q-en-kubernetes-56abf1c19aeb772bfdf5543a20f43dcbe7bbdea78cd050790b056a5e979d3a30", "query": "http://pr- is another failure.\nAlso seen in http://pr-\nAgain. Not assigned to anyone, so I am sending to for routing\nAgain: Is there anyone working on this? I will spend some time looking into this.\nI think that the name of this issue is misleading - the issue seems to be unrelated to etcd_watcher Basically, if you look into this part of log: So it seems that, the WaitGroup that is problematic is in httptest.Server. So my feeling is that we are simply talking to a server that is already being close and this is ~roughly duplicate of We should just find where exactly the problem is and try to work-around it.\nI think this is exactly what we had before. I will send a PR with the fix today.", "positive_passages": [{"docid": "doc-en-kubernetes-30af27f79b5fa46dada9eeef3c474ec72be9930047e15433de9c7b88e483ccc3", "text": "func TestAccept(t *testing.T) { _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() resp, err := http.Get(s.URL + \"/api/\") if err != nil {", "commid": "kubernetes_pr_21259"}], "negative_passages": []} {"query_id": "q-en-kubernetes-56abf1c19aeb772bfdf5543a20f43dcbe7bbdea78cd050790b056a5e979d3a30", "query": "http://pr- is another failure.\nAlso seen in http://pr-\nAgain. Not assigned to anyone, so I am sending to for routing\nAgain: Is there anyone working on this? I will spend some time looking into this.\nI think that the name of this issue is misleading - the issue seems to be unrelated to etcd_watcher Basically, if you look into this part of log: So it seems that, the WaitGroup that is problematic is in httptest.Server. 
So my feeling is that we are simply talking to a server that is already being close and this is ~roughly duplicate of We should just find where exactly the problem is and try to work-around it.\nI think this is exactly what we had before. I will send a PR with the fix today.", "positive_passages": [{"docid": "doc-en-kubernetes-a072704031fb7f1d2c2a9b10f14f471ebfa5954d666f119cb7e47b20395ccfe5", "text": "func TestMasterProcessMetrics(t *testing.T) { _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() metrics, err := scrapeMetrics(s) if err != nil {", "commid": "kubernetes_pr_21259"}], "negative_passages": []} {"query_id": "q-en-kubernetes-56abf1c19aeb772bfdf5543a20f43dcbe7bbdea78cd050790b056a5e979d3a30", "query": "http://pr- is another failure.\nAlso seen in http://pr-\nAgain. Not assigned to anyone, so I am sending to for routing\nAgain: Is there anyone working on this? I will spend some time looking into this.\nI think that the name of this issue is misleading - the issue seems to be unrelated to etcd_watcher Basically, if you look into this part of log: So it seems that, the WaitGroup that is problematic is in httptest.Server. So my feeling is that we are simply talking to a server that is already being close and this is ~roughly duplicate of We should just find where exactly the problem is and try to work-around it.\nI think this is exactly what we had before. I will send a PR with the fix today.", "positive_passages": [{"docid": "doc-en-kubernetes-0d675fe0f2ce234d3bb0767d7e74694adbb0ec9b1cd4dd072842ed25e8ee9952", "text": "func TestApiserverMetrics(t *testing.T) { _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() // Make a request to the apiserver to ensure there's at least one data point // for the metrics we're expecting -- otherwise, they won't be exported.", "commid": "kubernetes_pr_21259"}], "negative_passages": []} {"query_id": "q-en-kubernetes-56abf1c19aeb772bfdf5543a20f43dcbe7bbdea78cd050790b056a5e979d3a30", "query": "http://pr- is another failure.\nAlso seen in http://pr-\nAgain. Not assigned to anyone, so I am sending to for routing\nAgain: Is there anyone working on this? I will spend some time looking into this.\nI think that the name of this issue is misleading - the issue seems to be unrelated to etcd_watcher Basically, if you look into this part of log: So it seems that, the WaitGroup that is problematic is in httptest.Server. So my feeling is that we are simply talking to a server that is already being close and this is ~roughly duplicate of We should just find where exactly the problem is and try to work-around it.\nI think this is exactly what we had before. 
I will send a PR with the fix today.", "positive_passages": [{"docid": "doc-en-kubernetes-47cea94ac5e65434685c617e73d9b7f1c5452ec6468319488d1e653ea664cfff", "text": "func TestPersistentVolumeRecycler(t *testing.T) { _, s := framework.RunAMaster(t) defer s.Close() // TODO: Uncomment when fix #19254 // defer s.Close() deleteAllEtcdKeys() // Use higher QPS and Burst, there is a test for race condition below, which", "commid": "kubernetes_pr_21259"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7ebf7acbde7789cdda053450cb89ee471a2ab525583ef16f7638edd03a598813", "query": "I'm following step by step tutorial on how to create resource quotas per namespace () looks like this: { \"apiVersion\": \"v1\", \"kind\": \"ResourceQuota\", \"metadata\": { \"name\": \"quota\", }, \"spec\": { \"hard\": { \"memory\": \"1Gi\", \"cpu\": \"20\", \"pods\": \"10\", \"services\": \"5\", \"replicationcontrollers\":\"20\", \"resourcequotas\":\"1\", }, } when i'm about to create quota it fails with: kubectl create -f --namespace=prod invalid character '}' looking for beginning of object key string Namespace \"prod\" exists: kubectl get namespace NAME LABELS STATUS AGE default <noneActive 26d dev name=development Active 8d kube-system <noneActive 26d prod name=production Active 8d kubectl version Client Version: version.Info{Major:\"1\", Minor:\"1\", GitVersion:\"v1.1.2\", GitCommit:\"\", GitTreeState:\"clean\"} Server Version: version.Info{Major:\"1\", Minor:\"1\", GitVersion:\"v1.1.2\", GitCommit:\"\", GitTreeState:\"clean\"}\nThose trailing commas are wrong. should look like: I'll submit a docs patch\nsweet. ta. That seem to do the trick: kubectl create -f --namespace=prod resourcequota \"quota\" created\nI'm going to reopen until the PR which fixes this merges, when it should autoclose...\nI'm following this tutorial on installing percona with kubectl, and I'm getting a similar error: When I run: kubectl create -f pmm-server- I get: error: json: line 1: invalid character \u2018\u00c2\u2019 looking for beginning of object key string", "positive_passages": [{"docid": "doc-en-kubernetes-708ab6b13e7b4be2147026b890fa9e148594788b5838ece69d729c71f50b4a45", "text": "\"apiVersion\": \"v1\", \"kind\": \"ResourceQuota\", \"metadata\": { \"name\": \"quota\", \"name\": \"quota\" }, \"spec\": { \"hard\": {", "commid": "kubernetes_pr_21388"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7ebf7acbde7789cdda053450cb89ee471a2ab525583ef16f7638edd03a598813", "query": "I'm following step by step tutorial on how to create resource quotas per namespace () looks like this: { \"apiVersion\": \"v1\", \"kind\": \"ResourceQuota\", \"metadata\": { \"name\": \"quota\", }, \"spec\": { \"hard\": { \"memory\": \"1Gi\", \"cpu\": \"20\", \"pods\": \"10\", \"services\": \"5\", \"replicationcontrollers\":\"20\", \"resourcequotas\":\"1\", }, } when i'm about to create quota it fails with: kubectl create -f --namespace=prod invalid character '}' looking for beginning of object key string Namespace \"prod\" exists: kubectl get namespace NAME LABELS STATUS AGE default <noneActive 26d dev name=development Active 8d kube-system <noneActive 26d prod name=production Active 8d kubectl version Client Version: version.Info{Major:\"1\", Minor:\"1\", GitVersion:\"v1.1.2\", GitCommit:\"\", GitTreeState:\"clean\"} Server Version: version.Info{Major:\"1\", Minor:\"1\", GitVersion:\"v1.1.2\", GitCommit:\"\", GitTreeState:\"clean\"}\nThose trailing commas are wrong. should look like: I'll submit a docs patch\nsweet. ta. 
That seem to do the trick: kubectl create -f --namespace=prod resourcequota \"quota\" created\nI'm going to reopen until the PR which fixes this merges, when it should autoclose...\nI'm following this tutorial on installing percona with kubectl, and I'm getting a similar error: When I run: kubectl create -f pmm-server- I get: error: json: line 1: invalid character \u2018\u00c2\u2019 looking for beginning of object key string", "positive_passages": [{"docid": "doc-en-kubernetes-869d1eb1156893b288bffb3f7fbcc8a5f8ec07d69880b865fbe21e8891c23e31", "text": "\"pods\": \"10\", \"services\": \"5\", \"replicationcontrollers\":\"20\", \"resourcequotas\":\"1\", }, \"resourcequotas\":\"1\" } } } EOF", "commid": "kubernetes_pr_21388"}], "negative_passages": []} {"query_id": "q-en-kubernetes-8281098c4a3749f944133097833d93b74dab4d64a54e55b1ef6d2ac82651b4ee", "query": "The says this about the new GUI: It does not what? Seems important!\nHmm... That's interesting. Will fix.", "positive_passages": [{"docid": "doc-en-kubernetes-29265b8887cef3d94bd88b793941777bce7d10c31b014ef95aa5cda24f76744b", "text": "trigger scaling up and down the number of pods in your application. * New GUI (dashboard) allows you to get started quickly and enables the same functionality found in the CLI as a more approachable and discoverable way of interacting with the system. Note: the GUI is eanbled by default for new cluster creation, however, it does not interacting with the system. Note: the GUI is enabled by default in 1.2 clusters. \"XXX \"Dashboard ## Other notable improvements", "commid": "kubernetes_pr_23324"}], "negative_passages": []} {"query_id": "q-en-kubernetes-eaf1a4d119ad0fb53a39512c42d412af412ec0302afd730bf3a1c25d36f547fb", "query": "When creating a new cluster on AWS with the 1.2 version I always get a failure when checking docker on the nodes, but after the check loop ends everything seems to work fine and all nodes are marked as ready. The instance types for nodes is\nThat is very surprising. Are you using any particular settings (to help me try to reproduce the problem)?\nThese are the exports I make before running kube-up:\nThe problem is the instances take a very long time to start up. I had to increase the timeout to 19.\nHey how did you increase the timeout? Didn't see a setting in config- that I could override. Getting the same issue as the poster.\nThanks , that did the trick!", "positive_passages": [{"docid": "doc-en-kubernetes-55141c12fad505112d65cce8fce967b4d367785b296f55f4f89760a8986281c7", "text": "local output=`check-minion ${minion_ip}` echo $output if [[ \"${output}\" != \"working\" ]]; then if (( attempt > 9 )); then if (( attempt > 20 )); then echo echo -e \"${color_red}Your cluster is unlikely to work correctly.\" >&2 echo \"Please run ./cluster/kube-down.sh and re-create the\" >&2", "commid": "kubernetes_pr_25405"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ff81b881a335ec393925d5c77ad19a1d9dde82cef6e119d6f7df449f9fa6f37d", "query": "I have been having the kube-controller-manager (running in a container) crash quite frequently with this error: I cant fathom what seems to cause it. It seems to happen consistently for a while (CrashLoopBackoff occurs) and then fixes itself after an undetermined amount of time. Kubectl version: Docker version: Networking: Calico (with the calico k8s policy agent) Please do let me know if there is any other information that I can supply. I appreciate I haven't given much to go off of. 
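Illustrative aside (not part of the dataset records above): the `invalid character '}' looking for beginning of object key string` error quoted in the ResourceQuota thread is exactly what a strict JSON decoder reports when it hits a trailing comma. A minimal Go sketch, standard library only, reproduces the failure and shows that dropping the comma — which is all the pr_21388 docs patch does — makes the manifest parse:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Trailing comma after "resourcequotas":"1", as in the original docs example.
	bad := `{"apiVersion":"v1","kind":"ResourceQuota","spec":{"hard":{"resourcequotas":"1",}}}`
	good := `{"apiVersion":"v1","kind":"ResourceQuota","spec":{"hard":{"resourcequotas":"1"}}}`

	var v interface{}
	if err := json.Unmarshal([]byte(bad), &v); err != nil {
		// Prints: invalid character '}' looking for beginning of object key string
		fmt.Println("bad manifest:", err)
	}
	if err := json.Unmarshal([]byte(good), &v); err == nil {
		fmt.Println("good manifest: parses cleanly")
	}
}
```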
I am not sure what the steps to reproduce are...\ncc\nIt looks like a bug. I'll take a look. Thanks for reporting it.\nWeirdly, the description showed up in the email github sent me but it appears absent on the github issue?\nThe description was there when I read it but disappeared now.\nWell thats very odd! If you have it on your email, could you post it on here please (and I will update the description)? I unfortunately haven't got the error message to hand now...\nI took the liberty of just editing the OP.\nthanks a lot! Not sure what happened there!\ncould you paste the returned output at \"server-ip:port/api\" and \"server-ip:port/apis\"?\nSure.\nThanks \"\" has an empty preferred version and that's causing the error you reported. Is \"\" created via the thirdparty resource api? ()\naha! Good spot! Yes, it was created via that API. This is the Yaml file I used to create it: Here is the output of\nThanks for the confirmation Your configuration is correct. The preferred version is left empty by the thirdparty resource controller, while our discovery client has an assumption that the preferred version won't be empty. I'll patch the discovery client to tolerate an empty preferred version, and will discuss with on if we should make a default preferred version for a thirdparty resource.\nThanks a lot Great job on working it out :)\nSent to fix.\nWhat would be a workaround for this before the release, how can one specify the preferredVersion?\nAFAIK, there is no workaround. There is no way to specify the preferredVersion for the thirdPartyResource.\nI have a quick workaround to fix this : 1) Change the kubernetes to reflect the following : from : thirdpartyresources=true to : thirdpartyresources=false 2) Then restart 'kubelet' service OR reboot the K8S Master server systemctl restart kubelet 3) Revert back the changes done to kube- from : thirdpartyresources=false to : thirdpartyresources=true 4) Verify whether the above change on the above manifest file has been automatically picked up by the kubernetes API ps aux | grep -i thirdpartyresources 5) Finally you may restart kubelet service systemctl restart kubelet 6) Now verify whether the 'kube-controller-manager' is in a 'Running' state kubectl get pods --all-namespaces | grep -i controller-manager Hope this fix works for you guys as well !!", "positive_passages": [{"docid": "doc-en-kubernetes-bd257930c8ce2489746dd8e23105bcd86cba557753c9c2c2fe19297750ee378d", "text": "// ServerResourcesForGroupVersion returns the supported resources for a group and version. func (d *DiscoveryClient) ServerResourcesForGroupVersion(groupVersion string) (resources *unversioned.APIResourceList, err error) { url := url.URL{} if groupVersion == \"v1\" { if len(groupVersion) == 0 { return nil, fmt.Errorf(\"groupVersion shouldn't be empty\") } else if groupVersion == \"v1\" { url.Path = \"/api/\" + groupVersion } else { url.Path = \"/apis/\" + groupVersion", "commid": "kubernetes_pr_23985"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ff81b881a335ec393925d5c77ad19a1d9dde82cef6e119d6f7df449f9fa6f37d", "query": "I have been having the kube-controller-manager (running in a container) crash quite frequently with this error: I cant fathom what seems to cause it. It seems to happen consistently for a while (CrashLoopBackoff occurs) and then fixes itself after an undetermined amount of time. Kubectl version: Docker version: Networking: Calico (with the calico k8s policy agent) Please do let me know if there is any other information that I can supply. 
I appreciate I haven't given much to go off of. I am not sure what the steps to reproduce are...\ncc\nIt looks like a bug. I'll take a look. Thanks for reporting it.\nWeirdly, the description showed up in the email github sent me but it appears absent on the github issue?\nThe description was there when I read it but disappeared now.\nWell thats very odd! If you have it on your email, could you post it on here please (and I will update the description)? I unfortunately haven't got the error message to hand now...\nI took the liberty of just editing the OP.\nthanks a lot! Not sure what happened there!\ncould you paste the returned output at \"server-ip:port/api\" and \"server-ip:port/apis\"?\nSure.\nThanks \"\" has an empty preferred version and that's causing the error you reported. Is \"\" created via the thirdparty resource api? ()\naha! Good spot! Yes, it was created via that API. This is the Yaml file I used to create it: Here is the output of\nThanks for the confirmation Your configuration is correct. The preferred version is left empty by the thirdparty resource controller, while our discovery client has an assumption that the preferred version won't be empty. I'll patch the discovery client to tolerate an empty preferred version, and will discuss with on if we should make a default preferred version for a thirdparty resource.\nThanks a lot Great job on working it out :)\nSent to fix.\nWhat would be a workaround for this before the release, how can one specify the preferredVersion?\nAFAIK, there is no workaround. There is no way to specify the preferredVersion for the thirdPartyResource.\nI have a quick workaround to fix this : 1) Change the kubernetes to reflect the following : from : thirdpartyresources=true to : thirdpartyresources=false 2) Then restart 'kubelet' service OR reboot the K8S Master server systemctl restart kubelet 3) Revert back the changes done to kube- from : thirdpartyresources=false to : thirdpartyresources=true 4) Verify whether the above change on the above manifest file has been automatically picked up by the kubernetes API ps aux | grep -i thirdpartyresources 5) Finally you may restart kubelet service systemctl restart kubelet 6) Now verify whether the 'kube-controller-manager' is in a 'Running' state kubectl get pods --all-namespaces | grep -i controller-manager Hope this fix works for you guys as well !!", "positive_passages": [{"docid": "doc-en-kubernetes-a7a9ebbf68a63530274871dc065bcb1affd06fc90f6193644a7640863bf7c661", "text": "\"k8s.io/kubernetes/pkg/client/typed/discovery\" \"k8s.io/kubernetes/pkg/client/typed/dynamic\" \"k8s.io/kubernetes/pkg/runtime\" utilerrors \"k8s.io/kubernetes/pkg/util/errors\" \"k8s.io/kubernetes/pkg/util/sets\" \"github.com/golang/glog\"", "commid": "kubernetes_pr_23985"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ff81b881a335ec393925d5c77ad19a1d9dde82cef6e119d6f7df449f9fa6f37d", "query": "I have been having the kube-controller-manager (running in a container) crash quite frequently with this error: I cant fathom what seems to cause it. It seems to happen consistently for a while (CrashLoopBackoff occurs) and then fixes itself after an undetermined amount of time. Kubectl version: Docker version: Networking: Calico (with the calico k8s policy agent) Please do let me know if there is any other information that I can supply. I appreciate I haven't given much to go off of. I am not sure what the steps to reproduce are...\ncc\nIt looks like a bug. I'll take a look. 
Thanks for reporting it.\nWeirdly, the description showed up in the email github sent me but it appears absent on the github issue?\nThe description was there when I read it but disappeared now.\nWell thats very odd! If you have it on your email, could you post it on here please (and I will update the description)? I unfortunately haven't got the error message to hand now...\nI took the liberty of just editing the OP.\nthanks a lot! Not sure what happened there!\ncould you paste the returned output at \"server-ip:port/api\" and \"server-ip:port/apis\"?\nSure.\nThanks \"\" has an empty preferred version and that's causing the error you reported. Is \"\" created via the thirdparty resource api? ()\naha! Good spot! Yes, it was created via that API. This is the Yaml file I used to create it: Here is the output of\nThanks for the confirmation Your configuration is correct. The preferred version is left empty by the thirdparty resource controller, while our discovery client has an assumption that the preferred version won't be empty. I'll patch the discovery client to tolerate an empty preferred version, and will discuss with on if we should make a default preferred version for a thirdparty resource.\nThanks a lot Great job on working it out :)\nSent to fix.\nWhat would be a workaround for this before the release, how can one specify the preferredVersion?\nAFAIK, there is no workaround. There is no way to specify the preferredVersion for the thirdPartyResource.\nI have a quick workaround to fix this : 1) Change the kubernetes to reflect the following : from : thirdpartyresources=true to : thirdpartyresources=false 2) Then restart 'kubelet' service OR reboot the K8S Master server systemctl restart kubelet 3) Revert back the changes done to kube- from : thirdpartyresources=false to : thirdpartyresources=true 4) Verify whether the above change on the above manifest file has been automatically picked up by the kubernetes API ps aux | grep -i thirdpartyresources 5) Finally you may restart kubelet service systemctl restart kubelet 6) Now verify whether the 'kube-controller-manager' is in a 'Running' state kubectl get pods --all-namespaces | grep -i controller-manager Hope this fix works for you guys as well !!", "positive_passages": [{"docid": "doc-en-kubernetes-c5cc1a2ea43566482c53f5c70c517dbdeab000bfcbae9adfd76be20699245532", "text": "if err != nil { return results, err } allErrs := []error{} for _, apiGroup := range serverGroupList.Groups { preferredVersion := apiGroup.PreferredVersion apiResourceList, err := discoveryClient.ServerResourcesForGroupVersion(preferredVersion.GroupVersion) if err != nil { return results, err allErrs = append(allErrs, err) continue } groupVersion := unversioned.GroupVersion{Group: apiGroup.Name, Version: preferredVersion.Version} for _, apiResource := range apiResourceList.APIResources {", "commid": "kubernetes_pr_23985"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ff81b881a335ec393925d5c77ad19a1d9dde82cef6e119d6f7df449f9fa6f37d", "query": "I have been having the kube-controller-manager (running in a container) crash quite frequently with this error: I cant fathom what seems to cause it. It seems to happen consistently for a while (CrashLoopBackoff occurs) and then fixes itself after an undetermined amount of time. Kubectl version: Docker version: Networking: Calico (with the calico k8s policy agent) Please do let me know if there is any other information that I can supply. I appreciate I haven't given much to go off of. 
I am not sure what the steps to reproduce are...\ncc\nIt looks like a bug. I'll take a look. Thanks for reporting it.\nWeirdly, the description showed up in the email github sent me but it appears absent on the github issue?\nThe description was there when I read it but disappeared now.\nWell thats very odd! If you have it on your email, could you post it on here please (and I will update the description)? I unfortunately haven't got the error message to hand now...\nI took the liberty of just editing the OP.\nthanks a lot! Not sure what happened there!\ncould you paste the returned output at \"server-ip:port/api\" and \"server-ip:port/apis\"?\nSure.\nThanks \"\" has an empty preferred version and that's causing the error you reported. Is \"\" created via the thirdparty resource api? ()\naha! Good spot! Yes, it was created via that API. This is the Yaml file I used to create it: Here is the output of\nThanks for the confirmation Your configuration is correct. The preferred version is left empty by the thirdparty resource controller, while our discovery client has an assumption that the preferred version won't be empty. I'll patch the discovery client to tolerate an empty preferred version, and will discuss with on if we should make a default preferred version for a thirdparty resource.\nThanks a lot Great job on working it out :)\nSent to fix.\nWhat would be a workaround for this before the release, how can one specify the preferredVersion?\nAFAIK, there is no workaround. There is no way to specify the preferredVersion for the thirdPartyResource.\nI have a quick workaround to fix this : 1) Change the kubernetes to reflect the following : from : thirdpartyresources=true to : thirdpartyresources=false 2) Then restart 'kubelet' service OR reboot the K8S Master server systemctl restart kubelet 3) Revert back the changes done to kube- from : thirdpartyresources=false to : thirdpartyresources=true 4) Verify whether the above change on the above manifest file has been automatically picked up by the kubernetes API ps aux | grep -i thirdpartyresources 5) Finally you may restart kubelet service systemctl restart kubelet 6) Now verify whether the 'kube-controller-manager' is in a 'Running' state kubectl get pods --all-namespaces | grep -i controller-manager Hope this fix works for you guys as well !!", "positive_passages": [{"docid": "doc-en-kubernetes-2ec66b9ccd444984accc7065b122700d44eea6a74184d8617985ff3549462d69", "text": "results = append(results, groupVersion.WithResource(apiResource.Name)) } } return results, nil return results, utilerrors.NewAggregate(allErrs) }", "commid": "kubernetes_pr_23985"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7377f65a40865118420e0f4c4b965cad4b25ab1082e4c9d411d42ac66506315f", "query": "Minor improvement in UX maybe?\nWe have semi-ambiguous syntax because 'get pod/foo' is also valid. We should have define syntax for this early on. On Apr 8, 2016 7:13 PM, \"Prashanth B\" wrote:\n+1 for the UX improvement.\nget pod/foo != get pod ns/foo ?\n.... LOL\nThis would make SO much easier to use :+1:\nI dont disagree, but it is somewhat complicated to transition now. Maybe something like 'kind/ns:name' which would be less ambiguous and strictly compatible (ns:name is not a valid name, nor is it confusing wrt kind/name). Need to define precedence of this with --namespace flag and in-file namespace yaml. I also don't see it as a huge win, personally. It would have been nice from day1 but relatively minor now, IMO. 
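Hedged sketch of the pattern applied in the pr_23985 diffs above: refuse to build a request for a group whose preferred version is empty, and aggregate per-group errors so one broken thirdparty group cannot abort the whole discovery pass (and with it the controller-manager). The types below are simplified stand-ins for illustration only, not the real k8s.io discovery client:

```go
package main

import (
	"errors"
	"fmt"
)

// groupVersion is a stand-in for an API group's preferred version;
// version may be empty for a thirdparty group, as in the report above.
type groupVersion struct {
	group, version string
}

func resourcesFor(gv groupVersion) ([]string, error) {
	if gv.version == "" {
		// Mirror of the fix: don't build "/apis/<group>/" with no version.
		return nil, fmt.Errorf("groupVersion shouldn't be empty for group %q", gv.group)
	}
	return []string{gv.group + "/" + gv.version + "/things"}, nil
}

func allResources(groups []groupVersion) ([]string, error) {
	var out []string
	var errs []error
	for _, gv := range groups {
		rs, err := resourcesFor(gv)
		if err != nil {
			// Collect the error and keep going instead of failing everything.
			errs = append(errs, err)
			continue
		}
		out = append(out, rs...)
	}
	return out, errors.Join(errs...)
}

func main() {
	got, err := allResources([]groupVersion{
		{"apps", "v1alpha1"},
		{"example.crd", ""}, // thirdparty group with an empty preferred version
	})
	fmt.Println(got, err)
}
```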
But this is a decision for UX team - if someone sent a patch implementing something like it, or at least a proposal that worked through all the details, it would be a better starting place. On Apr 10, 2016 12:33 PM, \"Lucas K\u00e4ldstr\u00f6m\" wrote:\nThis is more like P0 to me. Somewhere else I complained about having to type too much and the kind/name syntax. My vote if for kind/ns:name, too.\ncc\nWhat is the scenario you're trying to simplify? You often need to use resources from multiple namespaces on the same command line?\ncc\nns:name is not a valid name for namespaces and our core resources, but it can be a valid name for extension resources in the Kube ecosystem, like users, roles, authorizations. We'd have to add more slashes\nAttaching the namespace to the \"name\" part won't work well because then you have to chose whether is \"list all resources in namespace token\" or \"get the resource named token in the current namespace\". To allow both cases, you must attach the information as a discretely tokenized part or as part of the \"resource\" part. I think that parses unambiguously since resource can't have dots groups can't have colons and namespaces can't have slashes.\nI can assign short names to namespaces and end up with a small kubectl get line, --namespace= just feels unnecessary. I guess it would feel more natural if I could also do . This isn't a huge deal for me personally because I just bash alias my way out of typing.\nIn my case, I work with multiple namespaces, not on the same command line (I use --all-namespaces frequently, too). I guess I could create bash aliases, but I also tend to reinstall the machines I run kubectl from on a regular basis.\n--all-namespaces is so long anyway that it makes me cringe. , Rudi C wrote:\nWhat api group syntax did we settle on? I don't see it documented anywhere. cc\nWith Ubernetes, we'll also have multi-cluster scenarios coming.\nSyntax is: * `-n`: Namespace scope * `-l`: Label selector * also used for `--labels` in `expose`, but should be deprecated * `-L`: Label columns", "commid": "kubernetes_pr_30630"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7377f65a40865118420e0f4c4b965cad4b25ab1082e4c9d411d42ac66506315f", "query": "Minor improvement in UX maybe?\nWe have semi-ambiguous syntax because 'get pod/foo' is also valid. We should have define syntax for this early on. On Apr 8, 2016 7:13 PM, \"Prashanth B\" wrote:\n+1 for the UX improvement.\nget pod/foo != get pod ns/foo ?\n.... LOL\nThis would make SO much easier to use :+1:\nI dont disagree, but it is somewhat complicated to transition now. Maybe something like 'kind/ns:name' which would be less ambiguous and strictly compatible (ns:name is not a valid name, nor is it confusing wrt kind/name). Need to define precedence of this with --namespace flag and in-file namespace yaml. I also don't see it as a huge win, personally. It would have been nice from day1 but relatively minor now, IMO. But this is a decision for UX team - if someone sent a patch implementing something like it, or at least a proposal that worked through all the details, it would be a better starting place. On Apr 10, 2016 12:33 PM, \"Lucas K\u00e4ldstr\u00f6m\" wrote:\nThis is more like P0 to me. Somewhere else I complained about having to type too much and the kind/name syntax. My vote if for kind/ns:name, too.\ncc\nWhat is the scenario you're trying to simplify? 
You often need to use resources from multiple namespaces on the same command line?\ncc\nns:name is not a valid name for namespaces and our core resources, but it can be a valid name for extension resources in the Kube ecosystem, like users, roles, authorizations. We'd have to add more slashes\nAttaching the namespace to the \"name\" part won't work well because then you have to chose whether is \"list all resources in namespace token\" or \"get the resource named token in the current namespace\". To allow both cases, you must attach the information as a discretely tokenized part or as part of the \"resource\" part. I think that parses unambiguously since resource can't have dots groups can't have colons and namespaces can't have slashes.\nI can assign short names to namespaces and end up with a small kubectl get line, --namespace= just feels unnecessary. I guess it would feel more natural if I could also do . This isn't a huge deal for me personally because I just bash alias my way out of typing.\nIn my case, I work with multiple namespaces, not on the same command line (I use --all-namespaces frequently, too). I guess I could create bash aliases, but I also tend to reinstall the machines I run kubectl from on a regular basis.\n--all-namespaces is so long anyway that it makes me cringe. , Rudi C wrote:\nWhat api group syntax did we settle on? I don't see it documented anywhere. cc\nWith Ubernetes, we'll also have multi-cluster scenarios coming.\nSyntax is: # Post-condition: verify shorthand `-n other` has the same results as `--namespace=other` kube::test::get_object_assert 'pods -n other' \"{{range.items}}{{$id_field}}:{{end}}\" 'valid-pod:' ### Delete POD valid-pod in specific namespace # Pre-condition: valid-pod POD exists", "commid": "kubernetes_pr_30630"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7377f65a40865118420e0f4c4b965cad4b25ab1082e4c9d411d42ac66506315f", "query": "Minor improvement in UX maybe?\nWe have semi-ambiguous syntax because 'get pod/foo' is also valid. We should have define syntax for this early on. On Apr 8, 2016 7:13 PM, \"Prashanth B\" wrote:\n+1 for the UX improvement.\nget pod/foo != get pod ns/foo ?\n.... LOL\nThis would make SO much easier to use :+1:\nI dont disagree, but it is somewhat complicated to transition now. Maybe something like 'kind/ns:name' which would be less ambiguous and strictly compatible (ns:name is not a valid name, nor is it confusing wrt kind/name). Need to define precedence of this with --namespace flag and in-file namespace yaml. I also don't see it as a huge win, personally. It would have been nice from day1 but relatively minor now, IMO. But this is a decision for UX team - if someone sent a patch implementing something like it, or at least a proposal that worked through all the details, it would be a better starting place. On Apr 10, 2016 12:33 PM, \"Lucas K\u00e4ldstr\u00f6m\" wrote:\nThis is more like P0 to me. Somewhere else I complained about having to type too much and the kind/name syntax. My vote if for kind/ns:name, too.\ncc\nWhat is the scenario you're trying to simplify? You often need to use resources from multiple namespaces on the same command line?\ncc\nns:name is not a valid name for namespaces and our core resources, but it can be a valid name for extension resources in the Kube ecosystem, like users, roles, authorizations. 
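Purely hypothetical illustration of the disambiguation argument above (resources never contain dots, names never contain colons, namespaces never contain slashes); this is not an accepted kubectl syntax, only a sketch of how such a token could be split unambiguously:

```go
package main

import (
	"fmt"
	"strings"
)

// parseRef splits a hypothetical "kind[.group]/[namespace:]name" token.
// Illustrative only: the token shape is not real kubectl syntax.
func parseRef(tok string) (kind, group, namespace, name string) {
	left, right, _ := strings.Cut(tok, "/")
	kind, group, _ = strings.Cut(left, ".") // resources never contain dots
	if ns, n, ok := strings.Cut(right, ":"); ok { // names never contain colons
		namespace, name = ns, n
	} else {
		name = right
	}
	return
}

func main() {
	fmt.Println(parseRef("pods/kube-system:kube-dns"))
	fmt.Println(parseRef("deployments.extensions/web"))
}
```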
We'd have to add more slashes\nAttaching the namespace to the \"name\" part won't work well because then you have to chose whether is \"list all resources in namespace token\" or \"get the resource named token in the current namespace\". To allow both cases, you must attach the information as a discretely tokenized part or as part of the \"resource\" part. I think that parses unambiguously since resource can't have dots groups can't have colons and namespaces can't have slashes.\nI can assign short names to namespaces and end up with a small kubectl get line, --namespace= just feels unnecessary. I guess it would feel more natural if I could also do . This isn't a huge deal for me personally because I just bash alias my way out of typing.\nIn my case, I work with multiple namespaces, not on the same command line (I use --all-namespaces frequently, too). I guess I could create bash aliases, but I also tend to reinstall the machines I run kubectl from on a regular basis.\n--all-namespaces is so long anyway that it makes me cringe. , Rudi C wrote:\nWhat api group syntax did we settle on? I don't see it documented anywhere. cc\nWith Ubernetes, we'll also have multi-cluster scenarios coming.\nSyntax is: Namespace: FlagInfo{prefix + FlagNamespace, \"\", \"\", \"If present, the namespace scope for this CLI request\"}, Namespace: FlagInfo{prefix + FlagNamespace, \"n\", \"\", \"If present, the namespace scope for this CLI request\"}, } }", "commid": "kubernetes_pr_30630"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c7a72df11498e7300fcb7761274972d0c9f40da0bd1ab208df9d0e2fd8c6758c", "query": "I'd like to be able to sort by a timestamp, e.g. but that fails with an exception: I appreciate that the root of the problem is probably outside of kubernetes, yet I hope you'd agree that sorting by a timestamp is very useful functionality.\nAgree with sorting by timestamp.. Just make it clear, is of type: while only these types are allowed to be sorted:", "positive_passages": [{"docid": "doc-en-kubernetes-38a9426b08ffadf2279099038a2e5af7946b7df1054e32f3a362b1769c1d1fbf", "text": "kube::test::get_object_assert pods \"{{range.items}}{{$id_field}}:{{end}}\" '' # Command kubectl get pods --sort-by=\"{metadata.name}\" kubectl get pods --sort-by=\"{metadata.creationTimestamp}\" ############################ # Kubectl --all-namespaces #", "commid": "kubernetes_pr_25022"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c7a72df11498e7300fcb7761274972d0c9f40da0bd1ab208df9d0e2fd8c6758c", "query": "I'd like to be able to sort by a timestamp, e.g. but that fails with an exception: I appreciate that the root of the problem is probably outside of kubernetes, yet I hope you'd agree that sorting by a timestamp is very useful functionality.\nAgree with sorting by timestamp.. Just make it clear, is of type: while only these types are allowed to be sorted:", "positive_passages": [{"docid": "doc-en-kubernetes-aca8342ee1c5e44a8f739e20985c76ca873ac501f954372c53d32afd83853bcc", "text": "\"sort\" \"k8s.io/kubernetes/pkg/api/meta\" \"k8s.io/kubernetes/pkg/api/unversioned\" \"k8s.io/kubernetes/pkg/api/v1\" \"k8s.io/kubernetes/pkg/runtime\" \"k8s.io/kubernetes/pkg/util/integer\" \"k8s.io/kubernetes/pkg/util/jsonpath\" \"github.com/golang/glog\"", "commid": "kubernetes_pr_25022"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c7a72df11498e7300fcb7761274972d0c9f40da0bd1ab208df9d0e2fd8c6758c", "query": "I'd like to be able to sort by a timestamp, e.g. 
but that fails with an exception: I appreciate that the root of the problem is probably outside of kubernetes, yet I hope you'd agree that sorting by a timestamp is very useful functionality.\nAgree with sorting by timestamp.. Just make it clear, is of type: while only these types are allowed to be sorted:", "positive_passages": [{"docid": "doc-en-kubernetes-7e1b9fdf7c7136990101bfea47a8d7b5bbca77e41bcf8ceedd8c2cc8faa0065c", "text": "return i.String() < j.String(), nil case reflect.Ptr: return isLess(i.Elem(), j.Elem()) case reflect.Struct: // special case handling lessFuncList := []structLessFunc{timeLess} if ok, less := structLess(i, j, lessFuncList); ok { return less, nil } // fallback to the fields comparision for idx := 0; idx < i.NumField(); idx++ { less, err := isLess(i.Field(idx), j.Field(idx)) if err != nil || !less { return less, err } } return true, nil case reflect.Array, reflect.Slice: // note: the length of i and j may be different for idx := 0; idx < integer.IntMin(i.Len(), j.Len()); idx++ { less, err := isLess(i.Index(idx), j.Index(idx)) if err != nil || !less { return less, err } } return true, nil default: return false, fmt.Errorf(\"unsortable type: %v\", i.Kind()) } } // structLessFunc checks whether i and j could be compared(the first return value), // and if it could, return whether i is less than j(the second return value) type structLessFunc func(i, j reflect.Value) (bool, bool) // structLess returns whether i and j could be compared with the given function list func structLess(i, j reflect.Value, lessFuncList []structLessFunc) (bool, bool) { for _, lessFunc := range lessFuncList { if ok, less := lessFunc(i, j); ok { return ok, less } } return false, false } // compare two unversioned.Time values. func timeLess(i, j reflect.Value) (bool, bool) { if i.Type() != reflect.TypeOf(unversioned.Unix(0, 0)) { return false, false } return true, i.MethodByName(\"Before\").Call([]reflect.Value{j})[0].Bool() } func (r *RuntimeSort) Less(i, j int) bool { iObj := r.objs[i] jObj := r.objs[j]", "commid": "kubernetes_pr_25022"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c7a72df11498e7300fcb7761274972d0c9f40da0bd1ab208df9d0e2fd8c6758c", "query": "I'd like to be able to sort by a timestamp, e.g. but that fails with an exception: I appreciate that the root of the problem is probably outside of kubernetes, yet I hope you'd agree that sorting by a timestamp is very useful functionality.\nAgree with sorting by timestamp.. Just make it clear, is of type: while only these types are allowed to be sorted:", "positive_passages": [{"docid": "doc-en-kubernetes-0049f1d4d9f09ba884ed929e8105d3f80339d3d25434589bdec3084d7ae698f3", "text": "\"testing\" internal \"k8s.io/kubernetes/pkg/api\" \"k8s.io/kubernetes/pkg/api/unversioned\" api \"k8s.io/kubernetes/pkg/api/v1\" \"k8s.io/kubernetes/pkg/runtime\" )", "commid": "kubernetes_pr_25022"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c7a72df11498e7300fcb7761274972d0c9f40da0bd1ab208df9d0e2fd8c6758c", "query": "I'd like to be able to sort by a timestamp, e.g. but that fails with an exception: I appreciate that the root of the problem is probably outside of kubernetes, yet I hope you'd agree that sorting by a timestamp is very useful functionality.\nAgree with sorting by timestamp.. 
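Sketch of the idea behind the `timeLess` helper in the pr_25022 diff above: when the sorter's reflection walk reaches a time-typed struct, compare it via its `Before` method rather than field by field. This standalone version assumes plain `time.Time`, which `unversioned.Time` wraps:

```go
package main

import (
	"fmt"
	"reflect"
	"time"
)

// timeLess reports whether i and j are time values and, if so, whether i < j.
// It mirrors the sorter's special case: call Before through reflection
// instead of comparing the struct's unexported fields.
func timeLess(i, j reflect.Value) (isTime, less bool) {
	if i.Type() != reflect.TypeOf(time.Time{}) || j.Type() != i.Type() {
		return false, false
	}
	out := i.MethodByName("Before").Call([]reflect.Value{j})
	return true, out[0].Bool()
}

func main() {
	a := reflect.ValueOf(time.Unix(100, 0))
	b := reflect.ValueOf(time.Unix(300, 0))
	fmt.Println(timeLess(a, b)) // true true
	fmt.Println(timeLess(b, a)) // true false
}
```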
Just make it clear, is of type: while only these types are allowed to be sorted:", "positive_passages": [{"docid": "doc-en-kubernetes-a7b4e88da2e9cd1cccc0828cca6147047ef34b1b671cd6c1574c9ab06659cd1a", "text": "field: \"{.metadata.name}\", }, { name: \"random-order-timestamp\", obj: &api.PodList{ Items: []api.Pod{ { ObjectMeta: api.ObjectMeta{ CreationTimestamp: unversioned.Unix(300, 0), }, }, { ObjectMeta: api.ObjectMeta{ CreationTimestamp: unversioned.Unix(100, 0), }, }, { ObjectMeta: api.ObjectMeta{ CreationTimestamp: unversioned.Unix(200, 0), }, }, }, }, sort: &api.PodList{ Items: []api.Pod{ { ObjectMeta: api.ObjectMeta{ CreationTimestamp: unversioned.Unix(100, 0), }, }, { ObjectMeta: api.ObjectMeta{ CreationTimestamp: unversioned.Unix(200, 0), }, }, { ObjectMeta: api.ObjectMeta{ CreationTimestamp: unversioned.Unix(300, 0), }, }, }, }, field: \"{.metadata.creationTimestamp}\", }, { name: \"random-order-numbers\", obj: &api.ReplicationControllerList{ Items: []api.ReplicationController{", "commid": "kubernetes_pr_25022"}], "negative_passages": []} {"query_id": "q-en-kubernetes-21b704a68514d2b400f1a4f479b12f48c6dc4bd890b9d4b14cffa5ee4c13d7d4", "query": "During\nany chance this is related to pass: https://k8s- fail: https://k8s- https://k8s-\nThis is because for some reason the submit queue rebuild didn't catch the flake before merging. Protobuf landed first, merging a new api object like petset should have failed.\nQuick fix is to rebuild the proto marshalers and push.\nAlso, hack/verify-generated- is broken, which probably is the root cause.\nLooking at it now\nThree problems: -generated-protobuf wasn't updated when we made update-generated-protobuf use docker, so it didn't fail out script was verifying the old API groups, not any of the new API groups that have been merged without protobuf serializations (although pkg/api/serialization_test should have failed, possibly the job didn't build with the latest master?), which caused the tests to fail\nSomething about the merge bot is broken - it merged without all the tests being green: . That's why this broke -\nSpawned since there is something wrong with update-generated-protobuf - will merge that to fix the failing test and then debug the generator in the morning.\nThe bot can race with pushes between when it starts testing and when it merges...", "positive_passages": [{"docid": "doc-en-kubernetes-480c578a70c385c4a31aa3b2f03cea53c276fbcba22e13088f675d78dfeae4f3", "text": " /* Copyright 2016 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ // Code generated by protoc-gen-gogo. // source: k8s.io/kubernetes/pkg/apis/apps/v1alpha1/generated.proto // DO NOT EDIT! /* Package v1alpha1 is a generated protocol buffer package. 
It is generated from these files: k8s.io/kubernetes/pkg/apis/apps/v1alpha1/generated.proto It has these top-level messages: PetSet PetSetList PetSetSpec PetSetStatus */ package v1alpha1 import proto \"github.com/gogo/protobuf/proto\" import fmt \"fmt\" import math \"math\" import _ \"github.com/gogo/protobuf/gogoproto\" import _ \"k8s.io/kubernetes/pkg/api/resource\" import k8s_io_kubernetes_pkg_api_unversioned \"k8s.io/kubernetes/pkg/api/unversioned\" import k8s_io_kubernetes_pkg_api_v1 \"k8s.io/kubernetes/pkg/api/v1\" import _ \"k8s.io/kubernetes/pkg/util/intstr\" import io \"io\" // Reference imports to suppress errors if they are not otherwise used. var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf func (m *PetSet) Reset() { *m = PetSet{} } func (m *PetSet) String() string { return proto.CompactTextString(m) } func (*PetSet) ProtoMessage() {} func (m *PetSetList) Reset() { *m = PetSetList{} } func (m *PetSetList) String() string { return proto.CompactTextString(m) } func (*PetSetList) ProtoMessage() {} func (m *PetSetSpec) Reset() { *m = PetSetSpec{} } func (m *PetSetSpec) String() string { return proto.CompactTextString(m) } func (*PetSetSpec) ProtoMessage() {} func (m *PetSetStatus) Reset() { *m = PetSetStatus{} } func (m *PetSetStatus) String() string { return proto.CompactTextString(m) } func (*PetSetStatus) ProtoMessage() {} func init() { proto.RegisterType((*PetSet)(nil), \"k8s.io.kubernetes.pkg.apis.apps.v1alpha1.PetSet\") proto.RegisterType((*PetSetList)(nil), \"k8s.io.kubernetes.pkg.apis.apps.v1alpha1.PetSetList\") proto.RegisterType((*PetSetSpec)(nil), \"k8s.io.kubernetes.pkg.apis.apps.v1alpha1.PetSetSpec\") proto.RegisterType((*PetSetStatus)(nil), \"k8s.io.kubernetes.pkg.apis.apps.v1alpha1.PetSetStatus\") } func (m *PetSet) Marshal() (data []byte, err error) { size := m.Size() data = make([]byte, size) n, err := m.MarshalTo(data) if err != nil { return nil, err } return data[:n], nil } func (m *PetSet) MarshalTo(data []byte) (int, error) { var i int _ = i var l int _ = l data[i] = 0xa i++ i = encodeVarintGenerated(data, i, uint64(m.ObjectMeta.Size())) n1, err := m.ObjectMeta.MarshalTo(data[i:]) if err != nil { return 0, err } i += n1 data[i] = 0x12 i++ i = encodeVarintGenerated(data, i, uint64(m.Spec.Size())) n2, err := m.Spec.MarshalTo(data[i:]) if err != nil { return 0, err } i += n2 data[i] = 0x1a i++ i = encodeVarintGenerated(data, i, uint64(m.Status.Size())) n3, err := m.Status.MarshalTo(data[i:]) if err != nil { return 0, err } i += n3 return i, nil } func (m *PetSetList) Marshal() (data []byte, err error) { size := m.Size() data = make([]byte, size) n, err := m.MarshalTo(data) if err != nil { return nil, err } return data[:n], nil } func (m *PetSetList) MarshalTo(data []byte) (int, error) { var i int _ = i var l int _ = l data[i] = 0xa i++ i = encodeVarintGenerated(data, i, uint64(m.ListMeta.Size())) n4, err := m.ListMeta.MarshalTo(data[i:]) if err != nil { return 0, err } i += n4 if len(m.Items) > 0 { for _, msg := range m.Items { data[i] = 0x12 i++ i = encodeVarintGenerated(data, i, uint64(msg.Size())) n, err := msg.MarshalTo(data[i:]) if err != nil { return 0, err } i += n } } return i, nil } func (m *PetSetSpec) Marshal() (data []byte, err error) { size := m.Size() data = make([]byte, size) n, err := m.MarshalTo(data) if err != nil { return nil, err } return data[:n], nil } func (m *PetSetSpec) MarshalTo(data []byte) (int, error) { var i int _ = i var l int _ = l if m.Replicas != nil { data[i] = 0x8 i++ i = encodeVarintGenerated(data, i, 
uint64(*m.Replicas)) } if m.Selector != nil { data[i] = 0x12 i++ i = encodeVarintGenerated(data, i, uint64(m.Selector.Size())) n5, err := m.Selector.MarshalTo(data[i:]) if err != nil { return 0, err } i += n5 } data[i] = 0x1a i++ i = encodeVarintGenerated(data, i, uint64(m.Template.Size())) n6, err := m.Template.MarshalTo(data[i:]) if err != nil { return 0, err } i += n6 if len(m.VolumeClaimTemplates) > 0 { for _, msg := range m.VolumeClaimTemplates { data[i] = 0x22 i++ i = encodeVarintGenerated(data, i, uint64(msg.Size())) n, err := msg.MarshalTo(data[i:]) if err != nil { return 0, err } i += n } } data[i] = 0x2a i++ i = encodeVarintGenerated(data, i, uint64(len(m.ServiceName))) i += copy(data[i:], m.ServiceName) return i, nil } func (m *PetSetStatus) Marshal() (data []byte, err error) { size := m.Size() data = make([]byte, size) n, err := m.MarshalTo(data) if err != nil { return nil, err } return data[:n], nil } func (m *PetSetStatus) MarshalTo(data []byte) (int, error) { var i int _ = i var l int _ = l if m.ObservedGeneration != nil { data[i] = 0x8 i++ i = encodeVarintGenerated(data, i, uint64(*m.ObservedGeneration)) } data[i] = 0x10 i++ i = encodeVarintGenerated(data, i, uint64(m.Replicas)) return i, nil } func encodeFixed64Generated(data []byte, offset int, v uint64) int { data[offset] = uint8(v) data[offset+1] = uint8(v >> 8) data[offset+2] = uint8(v >> 16) data[offset+3] = uint8(v >> 24) data[offset+4] = uint8(v >> 32) data[offset+5] = uint8(v >> 40) data[offset+6] = uint8(v >> 48) data[offset+7] = uint8(v >> 56) return offset + 8 } func encodeFixed32Generated(data []byte, offset int, v uint32) int { data[offset] = uint8(v) data[offset+1] = uint8(v >> 8) data[offset+2] = uint8(v >> 16) data[offset+3] = uint8(v >> 24) return offset + 4 } func encodeVarintGenerated(data []byte, offset int, v uint64) int { for v >= 1<<7 { data[offset] = uint8(v&0x7f | 0x80) v >>= 7 offset++ } data[offset] = uint8(v) return offset + 1 } func (m *PetSet) Size() (n int) { var l int _ = l l = m.ObjectMeta.Size() n += 1 + l + sovGenerated(uint64(l)) l = m.Spec.Size() n += 1 + l + sovGenerated(uint64(l)) l = m.Status.Size() n += 1 + l + sovGenerated(uint64(l)) return n } func (m *PetSetList) Size() (n int) { var l int _ = l l = m.ListMeta.Size() n += 1 + l + sovGenerated(uint64(l)) if len(m.Items) > 0 { for _, e := range m.Items { l = e.Size() n += 1 + l + sovGenerated(uint64(l)) } } return n } func (m *PetSetSpec) Size() (n int) { var l int _ = l if m.Replicas != nil { n += 1 + sovGenerated(uint64(*m.Replicas)) } if m.Selector != nil { l = m.Selector.Size() n += 1 + l + sovGenerated(uint64(l)) } l = m.Template.Size() n += 1 + l + sovGenerated(uint64(l)) if len(m.VolumeClaimTemplates) > 0 { for _, e := range m.VolumeClaimTemplates { l = e.Size() n += 1 + l + sovGenerated(uint64(l)) } } l = len(m.ServiceName) n += 1 + l + sovGenerated(uint64(l)) return n } func (m *PetSetStatus) Size() (n int) { var l int _ = l if m.ObservedGeneration != nil { n += 1 + sovGenerated(uint64(*m.ObservedGeneration)) } n += 1 + sovGenerated(uint64(m.Replicas)) return n } func sovGenerated(x uint64) (n int) { for { n++ x >>= 7 if x == 0 { break } } return n } func sozGenerated(x uint64) (n int) { return sovGenerated(uint64((x << 1) ^ uint64((int64(x) >> 63)))) } func (m *PetSet) Unmarshal(data []byte) error { l := len(data) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] 
iNdEx++ wire |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf(\"proto: PetSet: wiretype end group for non-group\") } if fieldNum <= 0 { return fmt.Errorf(\"proto: PetSet: illegal tag %d (wire type %d)\", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf(\"proto: wrong wireType = %d for field ObjectMeta\", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } if err := m.ObjectMeta.Unmarshal(data[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { return fmt.Errorf(\"proto: wrong wireType = %d for field Spec\", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } if err := m.Spec.Unmarshal(data[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { return fmt.Errorf(\"proto: wrong wireType = %d for field Status\", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } if err := m.Status.Unmarshal(data[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(data[iNdEx:]) if err != nil { return err } if skippy < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *PetSetList) Unmarshal(data []byte) error { l := len(data) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ wire |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf(\"proto: PetSetList: wiretype end group for non-group\") } if fieldNum <= 0 { return fmt.Errorf(\"proto: PetSetList: illegal tag %d (wire type %d)\", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { return fmt.Errorf(\"proto: wrong wireType = %d for field ListMeta\", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } if err := m.ListMeta.Unmarshal(data[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { return fmt.Errorf(\"proto: 
wrong wireType = %d for field Items\", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } m.Items = append(m.Items, PetSet{}) if err := m.Items[len(m.Items)-1].Unmarshal(data[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(data[iNdEx:]) if err != nil { return err } if skippy < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *PetSetSpec) Unmarshal(data []byte) error { l := len(data) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ wire |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf(\"proto: PetSetSpec: wiretype end group for non-group\") } if fieldNum <= 0 { return fmt.Errorf(\"proto: PetSetSpec: illegal tag %d (wire type %d)\", fieldNum, wire) } switch fieldNum { case 1: if wireType != 0 { return fmt.Errorf(\"proto: wrong wireType = %d for field Replicas\", wireType) } var v int32 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ v |= (int32(b) & 0x7F) << shift if b < 0x80 { break } } m.Replicas = &v case 2: if wireType != 2 { return fmt.Errorf(\"proto: wrong wireType = %d for field Selector\", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } if m.Selector == nil { m.Selector = &k8s_io_kubernetes_pkg_api_unversioned.LabelSelector{} } if err := m.Selector.Unmarshal(data[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { return fmt.Errorf(\"proto: wrong wireType = %d for field Template\", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } if err := m.Template.Unmarshal(data[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 4: if wireType != 2 { return fmt.Errorf(\"proto: wrong wireType = %d for field VolumeClaimTemplates\", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } if msglen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } m.VolumeClaimTemplates = 
append(m.VolumeClaimTemplates, k8s_io_kubernetes_pkg_api_v1.PersistentVolumeClaim{}) if err := m.VolumeClaimTemplates[len(m.VolumeClaimTemplates)-1].Unmarshal(data[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 5: if wireType != 2 { return fmt.Errorf(\"proto: wrong wireType = %d for field ServiceName\", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ stringLen |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } intStringLen := int(stringLen) if intStringLen < 0 { return ErrInvalidLengthGenerated } postIndex := iNdEx + intStringLen if postIndex > l { return io.ErrUnexpectedEOF } m.ServiceName = string(data[iNdEx:postIndex]) iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipGenerated(data[iNdEx:]) if err != nil { return err } if skippy < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func (m *PetSetStatus) Unmarshal(data []byte) error { l := len(data) iNdEx := 0 for iNdEx < l { preIndex := iNdEx var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ wire |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { return fmt.Errorf(\"proto: PetSetStatus: wiretype end group for non-group\") } if fieldNum <= 0 { return fmt.Errorf(\"proto: PetSetStatus: illegal tag %d (wire type %d)\", fieldNum, wire) } switch fieldNum { case 1: if wireType != 0 { return fmt.Errorf(\"proto: wrong wireType = %d for field ObservedGeneration\", wireType) } var v int64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ v |= (int64(b) & 0x7F) << shift if b < 0x80 { break } } m.ObservedGeneration = &v case 2: if wireType != 0 { return fmt.Errorf(\"proto: wrong wireType = %d for field Replicas\", wireType) } m.Replicas = 0 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowGenerated } if iNdEx >= l { return io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ m.Replicas |= (int32(b) & 0x7F) << shift if b < 0x80 { break } } default: iNdEx = preIndex skippy, err := skipGenerated(data[iNdEx:]) if err != nil { return err } if skippy < 0 { return ErrInvalidLengthGenerated } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF } iNdEx += skippy } } if iNdEx > l { return io.ErrUnexpectedEOF } return nil } func skipGenerated(data []byte) (n int, err error) { l := len(data) iNdEx := 0 for iNdEx < l { var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return 0, ErrIntOverflowGenerated } if iNdEx >= l { return 0, io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ wire |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } wireType := int(wire & 0x7) switch wireType { case 0: for shift := uint(0); ; shift += 7 { if shift >= 64 { return 0, ErrIntOverflowGenerated } if iNdEx >= l { return 0, io.ErrUnexpectedEOF } iNdEx++ if data[iNdEx-1] < 0x80 { break } } return iNdEx, nil case 1: iNdEx += 8 return iNdEx, nil case 2: var length int for shift := uint(0); ; shift += 7 { if shift >= 64 { return 0, ErrIntOverflowGenerated } if iNdEx >= l { return 0, io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ length |= (int(b) & 
0x7F) << shift if b < 0x80 { break } } iNdEx += length if length < 0 { return 0, ErrInvalidLengthGenerated } return iNdEx, nil case 3: for { var innerWire uint64 var start int = iNdEx for shift := uint(0); ; shift += 7 { if shift >= 64 { return 0, ErrIntOverflowGenerated } if iNdEx >= l { return 0, io.ErrUnexpectedEOF } b := data[iNdEx] iNdEx++ innerWire |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } innerWireType := int(innerWire & 0x7) if innerWireType == 4 { break } next, err := skipGenerated(data[start:]) if err != nil { return 0, err } iNdEx = start + next } return iNdEx, nil case 4: return iNdEx, nil case 5: iNdEx += 4 return iNdEx, nil default: return 0, fmt.Errorf(\"proto: illegal wireType %d\", wireType) } } panic(\"unreachable\") } var ( ErrInvalidLengthGenerated = fmt.Errorf(\"proto: negative length found during unmarshaling\") ErrIntOverflowGenerated = fmt.Errorf(\"proto: integer overflow\") ) ", "commid": "kubernetes_pr_24587"}], "negative_passages": []} {"query_id": "q-en-kubernetes-21b704a68514d2b400f1a4f479b12f48c6dc4bd890b9d4b14cffa5ee4c13d7d4", "query": "During\nany chance this is related to pass: https://k8s- fail: https://k8s- https://k8s-\nThis is because for some reason the submit queue rebuild didn't catch the flake before merging. Protobuf landed first, merging a new api object like petset should have failed.\nQuick fix is to rebuild the proto marshalers and push.\nAlso, hack/verify-generated- is broken, which probably is the root cause.\nLooking at it now\nThree problems: -generated-protobuf wasn't updated when we made update-generated-protobuf use docker, so it didn't fail out script was verifying the old API groups, not any of the new API groups that have been merged without protobuf serializations (although pkg/api/serialization_test should have failed, possibly the job didn't build with the latest master?), which caused the tests to fail\nSomething about the merge bot is broken - it merged without all the tests being green: . That's why this broke -\nSpawned since there is something wrong with update-generated-protobuf - will merge that to fix the failing test and then debug the generator in the morning.\nThe bot can race with pushes between when it starts testing and when it merges...", "positive_passages": [{"docid": "doc-en-kubernetes-f78551fdd9a516f286253b61bc6635bab2691d8df7070e2079c175b1dd4307b4", "text": " /* Copyright 2016 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ // This file was autogenerated by go-to-protobuf. Do not edit it manually! syntax = 'proto2'; package k8s.io.kubernetes.pkg.apis.apps.v1alpha1; import \"k8s.io/kubernetes/pkg/api/resource/generated.proto\"; import \"k8s.io/kubernetes/pkg/api/unversioned/generated.proto\"; import \"k8s.io/kubernetes/pkg/api/v1/generated.proto\"; import \"k8s.io/kubernetes/pkg/util/intstr/generated.proto\"; // Package-wide variables from generator \"generated\". option go_package = \"v1alpha1\"; // PetSet represents a set of pods with consistent identities. 
// Identities are defined as: // - Network: A single stable DNS and hostname. // - Storage: As many VolumeClaims as requested. // The PetSet guarantees that a given network identity will always // map to the same storage identity. PetSet is currently in alpha // and subject to change without notice. message PetSet { optional k8s.io.kubernetes.pkg.api.v1.ObjectMeta metadata = 1; // Spec defines the desired identities of pets in this set. optional PetSetSpec spec = 2; // Status is the current status of Pets in this PetSet. This data // may be out of date by some window of time. optional PetSetStatus status = 3; } // PetSetList is a collection of PetSets. message PetSetList { optional k8s.io.kubernetes.pkg.api.unversioned.ListMeta metadata = 1; repeated PetSet items = 2; } // A PetSetSpec is the specification of a PetSet. message PetSetSpec { // Replicas is the desired number of replicas of the given Template. // These are replicas in the sense that they are instantiations of the // same Template, but individual replicas also have a consistent identity. // If unspecified, defaults to 1. // TODO: Consider a rename of this field. optional int32 replicas = 1; // Selector is a label query over pods that should match the replica count. // If empty, defaulted to labels on the pod template. // More info: http://releases.k8s.io/HEAD/docs/user-guide/labels.md#label-selectors optional k8s.io.kubernetes.pkg.api.unversioned.LabelSelector selector = 2; // Template is the object that describes the pod that will be created if // insufficient replicas are detected. Each pod stamped out by the PetSet // will fulfill this Template, but have a unique identity from the rest // of the PetSet. optional k8s.io.kubernetes.pkg.api.v1.PodTemplateSpec template = 3; // VolumeClaimTemplates is a list of claims that pets are allowed to reference. // The PetSet controller is responsible for mapping network identities to // claims in a way that maintains the identity of a pet. Every claim in // this list must have at least one matching (by name) volumeMount in one // container in the template. A claim in this list takes precedence over // any volumes in the template, with the same name. // TODO: Define the behavior if a claim already exists with the same name. repeated k8s.io.kubernetes.pkg.api.v1.PersistentVolumeClaim volumeClaimTemplates = 4; // ServiceName is the name of the service that governs this PetSet. // This service must exist before the PetSet, and is responsible for // the network identity of the set. Pets get DNS/hostnames that follow the // pattern: pet-specific-string.serviceName.default.svc.cluster.local // where \"pet-specific-string\" is managed by the PetSet controller. optional string serviceName = 5; } // PetSetStatus represents the current state of a PetSet. message PetSetStatus { // most recent generation observed by this autoscaler. optional int64 observedGeneration = 1; // Replicas is the number of actual replicas. optional int32 replicas = 2; } ", "commid": "kubernetes_pr_24587"}], "negative_passages": []} {"query_id": "q-en-kubernetes-b0a43404a90a4502792529d6f593d8f07cbefedf62cf2856b9027033dd071c9f", "query": "Hi, Deployment patching with doesn't show up in the deployment rollout history: first: after: Is it a bug? Should it show up? Thanks, Peter\ncc\nWhich kubernetes and kubectl releases? Did the patch actually trigger a rollout?\nI also run the command from CI with the latest . 
So the problem affects the v1.2.3 kubectl as well\nWe should also fix the way we sort revisions - 10 should be after 9, not 1.\nCan confirm this. The issue is that the patch isn't updating , so the rollout history will just copy the value from the previous version. If you started with you get that for the patches. In my case I have a different line repeated. Builds 7 and 8 were built with different . I'm not sure why the patch worked once. Command I am calling is:\nIf no one is working on patch - i can try to make one.\nI'll like to work on this issue if you haven't started yet\nActually never mind I can look at something else :)\nHi, i see that you are the author of the following comment. I think that currently every valid change cause is discarded because of version conflict. Maybe you have some information why it happens?\ncommand instead generates a patch to update then applies that.\nIt was the same for patch, but for some reason was changed.\nThe intent was two fold. First, prevent the change-cause from ever failing patch the CLI command. change-cause isn't authoritative and the patch took hold, so a non-zero return code is inappropriate. Second, if the resource version change post-patch, before the update, the change-cause is unknown, something else touched it after the fact. A command that issues a instead of an could very easily break the history since patches will attempt to reapply themselves regardless of resource version in the face of conflicts. The object returned from the patch operation is the one that we use to avoid hitting a resource conflict with ourselves. Truth be told, I don't feel very strongly about getting the history right since it can be changed anyone with write access to the object, includes secrets that should be elided (--token for instance), ignores environment variables which affect outcome, ignores the actual path used to locate the binary you're running, and references paths that are only good from one location on one machine, so if you want to switch it back to a second patch, I don't mind as long as a failure to update the object doesn't cause a non-zero return from the command.\nCould there be a race between the call and the deployment updating itself to action the changes? On 19 May 2016 9:47 pm, \"David Eads\" wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-4c8b3952c76ccd4bec02c50ef2e9530c51c6ed23236f4f3b7bde1efe3e1eb3bc", "text": "if !options.Local { helper := resource.NewHelper(client, mapping) patchedObject, err := helper.Patch(namespace, name, patchType, patchBytes) _, err := helper.Patch(namespace, name, patchType, patchBytes) if err != nil { return err } if cmdutil.ShouldRecord(cmd, info) { if err := cmdutil.RecordChangeCause(patchedObject, f.Command()); err == nil { // don't return an error on failure. The patch itself succeeded, its only the hint for that change that failed // don't bother checking for failures of this replace, because a failure to indicate the hint doesn't fail the command // also, don't force the replacement. If the replacement fails on a resourceVersion conflict, then it means this // record hint is likely to be invalid anyway, so avoid the bad hint resource.NewHelper(client, mapping).Replace(namespace, name, false, patchedObject) // don't return an error on failure. The patch itself succeeded, its only the hint for that change that failed // don't bother checking for failures of this replace, because a failure to indicate the hint doesn't fail the command // also, don't force the replacement. 
If the replacement fails on a resourceVersion conflict, then it means this // record hint is likely to be invalid anyway, so avoid the bad hint patch, err := cmdutil.ChangeResourcePatch(info, f.Command()) if err == nil { helper.Patch(info.Namespace, info.Name, api.StrategicMergePatchType, patch) } } count++", "commid": "kubernetes_pr_25876"}], "negative_passages": []} {"query_id": "q-en-kubernetes-b0a43404a90a4502792529d6f593d8f07cbefedf62cf2856b9027033dd071c9f", "query": "Hi, Deployment patching with doesn't show up in the deployment rollout history: first: after: Is it a bug? Should it show up? Thanks, Peter\ncc\nWhich kubernetes and kubectl releases? Did the patch actually trigger a rollout?\nI also run the command from CI with the latest . So the problem affects the v1.2.3 kubectl as well\nWe should also fix the way we sort revisions - 10 should be after 9, not 1.\nCan confirm this. The issue is that the patch isn't updating , so the rollout history will just copy the value from the previous version. If you started with you get that for the patches. In my case I have a different line repeated. Builds 7 and 8 were built with different . I'm not sure why the patch worked once. Command I am calling is:\nIf no one is working on patch - i can try to make one.\nI'll like to work on this issue if you haven't started yet\nActually never mind I can look at something else :)\nHi, i see that you are the author of the following comment. I think that currently every valid change cause is discarded because of version conflict. Maybe you have some information why it happens?\ncommand instead generates a patch to update then applies that.\nIt was the same for patch, but for some reason was changed.\nThe intent was two fold. First, prevent the change-cause from ever failing patch the CLI command. change-cause isn't authoritative and the patch took hold, so a non-zero return code is inappropriate. Second, if the resource version change post-patch, before the update, the change-cause is unknown, something else touched it after the fact. A command that issues a instead of an could very easily break the history since patches will attempt to reapply themselves regardless of resource version in the face of conflicts. The object returned from the patch operation is the one that we use to avoid hitting a resource conflict with ourselves. Truth be told, I don't feel very strongly about getting the history right since it can be changed anyone with write access to the object, includes secrets that should be elided (--token for instance), ignores environment variables which affect outcome, ignores the actual path used to locate the binary you're running, and references paths that are only good from one location on one machine, so if you want to switch it back to a second patch, I don't mind as long as a failure to update the object doesn't cause a non-zero return from the command.\nCould there be a race between the call and the deployment updating itself to action the changes? 
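The exchange above describes recording the change cause with a small follow-up patch, rather than replacing the whole object, so that a resourceVersion conflict can never fail the user's command. Below is a minimal, self-contained Go sketch of that idea; it only shows how such an annotation-only patch body could be built. The helper name is hypothetical and this is not the actual kubectl code, though the annotation key matches the one kubectl records the change cause under.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// changeCausePatch builds a patch body that only touches the change-cause
// annotation. Because it carries no resourceVersion, sending it as a second
// PATCH after the user's own patch cannot fail with a conflict the way a
// full replace (PUT) of a previously-read object can.
func changeCausePatch(command string) ([]byte, error) {
	patch := map[string]interface{}{
		"metadata": map[string]interface{}{
			"annotations": map[string]string{
				"kubernetes.io/change-cause": command,
			},
		},
	}
	return json.Marshal(patch)
}

func main() {
	body, err := changeCausePatch("kubectl patch deployment example --record ...")
	if err != nil {
		panic(err)
	}
	// In kubectl this body would be applied as a strategic merge patch
	// after the primary patch has already succeeded.
	fmt.Println(string(body))
}
```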
On 19 May 2016 9:47 pm, \"David Eads\" wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-552d5a4f04229d194333866a2519858fcfa775d7c94f914661cab31949614a3f", "text": "} // TODO: if we ever want to go generic, this allows a clean -o yaml without trying to print columns or anything // rawExtension := &runtime.Unknown{ // \tRaw: originalPatchedObjJS, //\tRaw: originalPatchedObjJS, // } printer, err := f.PrinterForMapping(cmd, mapping, false)", "commid": "kubernetes_pr_25876"}], "negative_passages": []} {"query_id": "q-en-kubernetes-69b2bd17365c09a24925384c93384c1277cb9057b6c5ebf013176177cc0220c1", "query": "Running show failures at times: The expected queue is while got , see code here: I would give an idea that these cases have something similar, so referring and /cc\ncc\nI think so as well.\nThe debugging I to the test is obviously not correct - I'll fix that separately.\nI wonder if on very short intervals we're not getting a goroutine schedule swap, which causes this race to manifest (based on comments in the other thread).\nDoes heap insertion guarantee order if the heap value is equivalent? I don't think so.\nThis seems like it is that now() is not monotonically increasing, so we violate the add ordering condition. Since we already have a lock in most cases, we may want to either wrap now() to make it monotonic inc, or have the queue check for equality in the heap and add one nanosecond until there is no matching heap item. On Apr 29, 2016, at 7:12 PM, Marek Grabowski wrote: cc \u2014 You are receiving this because you were mentioned. Reply to this email directly or view it on GitHub\nThanks for clarification, so it looks like we need to give some time break between doing , I'll see whether that helps.\nI believe this is a bug in rate limited queue, so we definitely need to fix it (esp. since it's happening in the real world)\nyou wrote Would you be a good person to pick an owner for this bug? Giving P1 since we think happens in real world and I don't understand severity.\nand I are probably the two who have touched it, I think as well but may be misremembering. It should be a straightforward fix to make now monotonic, I don't know when I can get to it\nMe neither. If it's P1 I most likely won't have time before v1.3.\nI think override the function call in as you did would resolve this issue, what do you think, if you agree with this, I am gonna to try that.\nIt would fix it, although I think this is broken in the core code, not just in the test. We suspect real workloads have been impacted by this.\nOpened to determine whether we actually need to fix the RLQ\nAssigning to per", "positive_passages": [{"docid": "doc-en-kubernetes-923cb8ada8358a523094eae019170a1e5e8ca2b7f6bba855c866e25d4dc620d5", "text": "} func TestDelNode(t *testing.T) { defer func() { now = time.Now }() var tick int64 now = func() time.Time { t := time.Unix(tick, 0) tick++ return t } evictor := NewRateLimitedTimedQueue(flowcontrol.NewFakeAlwaysRateLimiter()) evictor.Add(\"first\") evictor.Add(\"second\")", "commid": "kubernetes_pr_25636"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7e9c2f49189e70f3f181de8a8d3d5cbdc6480a56bdc940348429fa13efb7c56f", "query": "When i doing an elasticsearch aggs with the following statement: then, i got this: Actually, i want the pod names like this:\nI have solved this problem by adding a index template for k8s logs. Now, I can aggregation pod names of the same namespace.\nHow are you sending your logs to elasticsearch? 
Using the default fluentd configuration?\nFirst, I use the default fluentd configuration. Secondly, my application is written in Node.js, using library.", "positive_passages": [{"docid": "doc-en-kubernetes-b67a2b56ff6553e006aec95fcf10eec7d4745e8dff4dd997ef8fb60423670a", "text": "tar xf elasticsearch-1.5.2.tar.gz && rm elasticsearch-1.5.2.tar.gz RUN mkdir -p /elasticsearch-1.5.2/config/templates COPY elasticsearch.yml /elasticsearch-1.5.2/config/elasticsearch.yml COPY template-k8s-logstash.json /elasticsearch-1.5.2/config/templates/template-k8s-logstash.json COPY run.sh / COPY elasticsearch_logging_discovery /", "commid": "kubernetes_pr_25309"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7e9c2f49189e70f3f181de8a8d3d5cbdc6480a56bdc940348429fa13efb7c56f", "query": "When I do an elasticsearch aggs with the following statement: then, I got this: Actually, I want the pod names like this:\nI have solved this problem by adding an index template for k8s logs. Now, I can aggregate pod names of the same namespace.\nHow are you sending your logs to elasticsearch? Using the default fluentd configuration?\nFirst, I use the default fluentd configuration. Secondly, my application is written in Node.js, using library.", "positive_passages": [{"docid": "doc-en-kubernetes-d349bc58d26d053ff0a1ab139347e36e4e8201866e5ef02e1ea52776812895ac", "text": " 
Have you guys seen this error?\nI think your deep copiers or conversion functions are not getting registered properly. If you look at my change in , some places are inappropriately registering with api.Scheme instead of the passed in scheme. Maybe you have that problem or a similar one, since I think you guys copied some code. , Nikhil Jindal wrote:\nDeep copy generation was slightly broken as well - you may want to double check that they are registered. On May 23, 2016, at 5:50 PM, Daniel Smith wrote: I think your deep copiers or conversion functions are not getting registered properly. If you look at my change in , some places are inappropriately registering with api.Scheme instead of the passed in scheme. Maybe you have that problem or a similar one, since I think you guys copied some code. , Nikhil Jindal wrote: \u2014 You are receiving this because you are on a team that was mentioned. Reply to this email directly or view it on GitHub\nI got the same error with latest branch, as I was swamping on service controller, so I use a previous version and worked around it.\ndeep copies were indeed missing, but adding them hasnt fixed the issue. Will debug more. is my PR to add the deep copies cc Please feel free to file issues for any such problem you face. We need to fix all these issues. Its good if we know about them early. Thanks!\nfixes this", "positive_passages": [{"docid": "doc-en-kubernetes-6b84738a94a23af0f56a7c8cf31220024d7d74d7993be985041a14472692bb8b", "text": " /* Copyright 2016 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package core import ( \"k8s.io/kubernetes/pkg/api\" \"k8s.io/kubernetes/pkg/fields\" \"k8s.io/kubernetes/pkg/labels\" \"k8s.io/kubernetes/pkg/runtime\" ) func addDefaultingFuncs(scheme *runtime.Scheme) { scheme.AddDefaultingFuncs( func(obj *api.ListOptions) { if obj.LabelSelector == nil { obj.LabelSelector = labels.Everything() } if obj.FieldSelector == nil { obj.FieldSelector = fields.Everything() } }, ) } func addConversionFuncs(scheme *runtime.Scheme) { scheme.AddConversionFuncs( api.Convert_unversioned_TypeMeta_To_unversioned_TypeMeta, api.Convert_unversioned_ListMeta_To_unversioned_ListMeta, api.Convert_intstr_IntOrString_To_intstr_IntOrString, api.Convert_unversioned_Time_To_unversioned_Time, api.Convert_Slice_string_To_unversioned_Time, api.Convert_string_To_labels_Selector, api.Convert_string_To_fields_Selector, api.Convert_Pointer_bool_To_bool, api.Convert_bool_To_Pointer_bool, api.Convert_Pointer_string_To_string, api.Convert_string_To_Pointer_string, api.Convert_labels_Selector_To_string, api.Convert_fields_Selector_To_string, api.Convert_resource_Quantity_To_resource_Quantity, ) } ", "commid": "kubernetes_pr_26142"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5056a2515a29befdf589a74f13db031ed4142bbabfa8d2c37cafac2de4fccaea", "query": "I started the federation-apiserver and went to and got the following output: Expected: I should have got an empty v1.ServiceList Any idea on what we are doing wrong? 
Have you guys seen this error?\nI think your deep copiers or conversion functions are not getting registered properly. If you look at my change in , some places are inappropriately registering with api.Scheme instead of the passed in scheme. Maybe you have that problem or a similar one, since I think you guys copied some code. , Nikhil Jindal wrote:\nDeep copy generation was slightly broken as well - you may want to double check that they are registered. On May 23, 2016, at 5:50 PM, Daniel Smith wrote: I think your deep copiers or conversion functions are not getting registered properly. If you look at my change in , some places are inappropriately registering with api.Scheme instead of the passed in scheme. Maybe you have that problem or a similar one, since I think you guys copied some code. , Nikhil Jindal wrote: \u2014 You are receiving this because you are on a team that was mentioned. Reply to this email directly or view it on GitHub\nI got the same error with latest branch, as I was swamping on service controller, so I use a previous version and worked around it.\ndeep copies were indeed missing, but adding them hasnt fixed the issue. Will debug more. is my PR to add the deep copies cc Please feel free to file issues for any such problem you face. We need to fix all these issues. Its good if we know about them early. Thanks!\nfixes this", "positive_passages": [{"docid": "doc-en-kubernetes-7594f092355a5bc5de3628251b80f8ee7d09e6e990b3b8284693c9a26ddfe3ac", "text": " /* Copyright 2016 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package core import ( \"k8s.io/kubernetes/pkg/api\" \"k8s.io/kubernetes/pkg/runtime\" ) func addDeepCopyFuncs(scheme *runtime.Scheme) { if err := scheme.AddGeneratedDeepCopyFuncs( api.DeepCopy_api_DeleteOptions, api.DeepCopy_api_ExportOptions, api.DeepCopy_api_List, api.DeepCopy_api_ListOptions, api.DeepCopy_api_ObjectMeta, api.DeepCopy_api_ObjectReference, api.DeepCopy_api_OwnerReference, api.DeepCopy_api_Service, api.DeepCopy_api_ServiceList, api.DeepCopy_api_ServicePort, api.DeepCopy_api_ServiceSpec, api.DeepCopy_api_ServiceStatus, ); err != nil { // if one of the deep copy functions is malformed, detect it immediately. panic(err) } } ", "commid": "kubernetes_pr_26142"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5056a2515a29befdf589a74f13db031ed4142bbabfa8d2c37cafac2de4fccaea", "query": "I started the federation-apiserver and went to and got the following output: Expected: I should have got an empty v1.ServiceList Any idea on what we are doing wrong? Have you guys seen this error?\nI think your deep copiers or conversion functions are not getting registered properly. If you look at my change in , some places are inappropriately registering with api.Scheme instead of the passed in scheme. Maybe you have that problem or a similar one, since I think you guys copied some code. , Nikhil Jindal wrote:\nDeep copy generation was slightly broken as well - you may want to double check that they are registered. 
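The diagnosis above (generated deep-copy and conversion functions being registered on the global api.Scheme instead of the scheme that is passed in) comes down to a simple registration pattern. The sketch below illustrates that pattern with a stand-in Scheme type so it compiles on its own; the real code registers generated DeepCopy_* and Convert_* functions on k8s.io/kubernetes/pkg/runtime.Scheme, as the referenced PR does.

```go
package registration

// Scheme is a stand-in for k8s.io/kubernetes/pkg/runtime.Scheme, reduced to
// the two registration calls that matter for the bug discussed above.
type Scheme struct {
	deepCopyFuncs   []interface{}
	conversionFuncs []interface{}
}

func (s *Scheme) AddGeneratedDeepCopyFuncs(funcs ...interface{}) error {
	s.deepCopyFuncs = append(s.deepCopyFuncs, funcs...)
	return nil
}

func (s *Scheme) AddConversionFuncs(funcs ...interface{}) error {
	s.conversionFuncs = append(s.conversionFuncs, funcs...)
	return nil
}

// AddToScheme installs everything on the scheme supplied by the caller (for
// example the federation apiserver's own scheme). The bug pattern is making
// these calls against a package-level global instead of the argument, which
// leaves the caller's scheme without deep-copy or conversion functions.
func AddToScheme(scheme *Scheme) {
	if err := scheme.AddGeneratedDeepCopyFuncs( /* generated DeepCopy_* functions */ ); err != nil {
		panic(err) // a malformed registration should fail fast
	}
	if err := scheme.AddConversionFuncs( /* generated Convert_* functions */ ); err != nil {
		panic(err)
	}
}
```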
On May 23, 2016, at 5:50 PM, Daniel Smith wrote: I think your deep copiers or conversion functions are not getting registered properly. If you look at my change in , some places are inappropriately registering with api.Scheme instead of the passed in scheme. Maybe you have that problem or a similar one, since I think you guys copied some code. , Nikhil Jindal wrote: \u2014 You are receiving this because you are on a team that was mentioned. Reply to this email directly or view it on GitHub\nI got the same error with latest branch, as I was swamping on service controller, so I use a previous version and worked around it.\ndeep copies were indeed missing, but adding them hasnt fixed the issue. Will debug more. is my PR to add the deep copies cc Please feel free to file issues for any such problem you face. We need to fix all these issues. Its good if we know about them early. Thanks!\nfixes this", "positive_passages": [{"docid": "doc-en-kubernetes-4bb9d79fd048482eaecc5f7b756e34b0dbbbd4b13e8aa133619f7ccf0277763a", "text": "&unversioned.APIGroup{}, &unversioned.APIResourceList{}, ) addDeepCopyFuncs(scheme) addDefaultingFuncs(scheme) addConversionFuncs(scheme) }", "commid": "kubernetes_pr_26142"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5056a2515a29befdf589a74f13db031ed4142bbabfa8d2c37cafac2de4fccaea", "query": "I started the federation-apiserver and went to and got the following output: Expected: I should have got an empty v1.ServiceList Any idea on what we are doing wrong? Have you guys seen this error?\nI think your deep copiers or conversion functions are not getting registered properly. If you look at my change in , some places are inappropriately registering with api.Scheme instead of the passed in scheme. Maybe you have that problem or a similar one, since I think you guys copied some code. , Nikhil Jindal wrote:\nDeep copy generation was slightly broken as well - you may want to double check that they are registered. On May 23, 2016, at 5:50 PM, Daniel Smith wrote: I think your deep copiers or conversion functions are not getting registered properly. If you look at my change in , some places are inappropriately registering with api.Scheme instead of the passed in scheme. Maybe you have that problem or a similar one, since I think you guys copied some code. , Nikhil Jindal wrote: \u2014 You are receiving this because you are on a team that was mentioned. Reply to this email directly or view it on GitHub\nI got the same error with latest branch, as I was swamping on service controller, so I use a previous version and worked around it.\ndeep copies were indeed missing, but adding them hasnt fixed the issue. Will debug more. is my PR to add the deep copies cc Please feel free to file issues for any such problem you face. We need to fix all these issues. Its good if we know about them early. Thanks!\nfixes this", "positive_passages": [{"docid": "doc-en-kubernetes-9193350c46f90fcba863fe730753ca88a2c7e627b9f6e798745899178328a0f9", "text": "import ( \"fmt\" \"k8s.io/kubernetes/pkg/api\" \"k8s.io/kubernetes/pkg/api/v1\" \"k8s.io/kubernetes/pkg/runtime\" )", "commid": "kubernetes_pr_26142"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5056a2515a29befdf589a74f13db031ed4142bbabfa8d2c37cafac2de4fccaea", "query": "I started the federation-apiserver and went to and got the following output: Expected: I should have got an empty v1.ServiceList Any idea on what we are doing wrong? 
Have you guys seen this error?\nI think your deep copiers or conversion functions are not getting registered properly. If you look at my change in , some places are inappropriately registering with api.Scheme instead of the passed in scheme. Maybe you have that problem or a similar one, since I think you guys copied some code. , Nikhil Jindal wrote:\nDeep copy generation was slightly broken as well - you may want to double check that they are registered. On May 23, 2016, at 5:50 PM, Daniel Smith wrote: I think your deep copiers or conversion functions are not getting registered properly. If you look at my change in , some places are inappropriately registering with api.Scheme instead of the passed in scheme. Maybe you have that problem or a similar one, since I think you guys copied some code. , Nikhil Jindal wrote: \u2014 You are receiving this because you are on a team that was mentioned. Reply to this email directly or view it on GitHub\nI got the same error with latest branch, as I was swamping on service controller, so I use a previous version and worked around it.\ndeep copies were indeed missing, but adding them hasnt fixed the issue. Will debug more. is my PR to add the deep copies cc Please feel free to file issues for any such problem you face. We need to fix all these issues. Its good if we know about them early. Thanks!\nfixes this", "positive_passages": [{"docid": "doc-en-kubernetes-607e84ced79aa2e76fd42d37a38cdd726e203cf299d4c370a9d77a45aac10bba", "text": "func addConversionFuncs(scheme *runtime.Scheme) { // Add non-generated conversion functions err := scheme.AddConversionFuncs( v1.Convert_api_ServiceSpec_To_v1_ServiceSpec, v1.Convert_v1_DeleteOptions_To_api_DeleteOptions, v1.Convert_api_DeleteOptions_To_v1_DeleteOptions, v1.Convert_v1_ExportOptions_To_api_ExportOptions, v1.Convert_api_ExportOptions_To_v1_ExportOptions, v1.Convert_v1_List_To_api_List, v1.Convert_api_List_To_v1_List, v1.Convert_v1_ListOptions_To_api_ListOptions, v1.Convert_api_ListOptions_To_v1_ListOptions, v1.Convert_v1_ObjectFieldSelector_To_api_ObjectFieldSelector, v1.Convert_api_ObjectFieldSelector_To_v1_ObjectFieldSelector, v1.Convert_v1_ObjectMeta_To_api_ObjectMeta, v1.Convert_api_ObjectMeta_To_v1_ObjectMeta, v1.Convert_v1_ObjectReference_To_api_ObjectReference, v1.Convert_api_ObjectReference_To_v1_ObjectReference, v1.Convert_v1_OwnerReference_To_api_OwnerReference, v1.Convert_api_OwnerReference_To_v1_OwnerReference, v1.Convert_v1_Service_To_api_Service, v1.Convert_api_Service_To_v1_Service, v1.Convert_v1_ServiceList_To_api_ServiceList, v1.Convert_api_ServiceList_To_v1_ServiceList, v1.Convert_v1_ServicePort_To_api_ServicePort, v1.Convert_api_ServicePort_To_v1_ServicePort, v1.Convert_v1_ServiceProxyOptions_To_api_ServiceProxyOptions, v1.Convert_api_ServiceProxyOptions_To_v1_ServiceProxyOptions, v1.Convert_v1_ServiceSpec_To_api_ServiceSpec, v1.Convert_api_ServiceSpec_To_v1_ServiceSpec, v1.Convert_v1_ServiceStatus_To_api_ServiceStatus, v1.Convert_api_ServiceStatus_To_v1_ServiceStatus, ) if err != nil { // If one of the conversion functions is malformed, detect it immediately.", "commid": "kubernetes_pr_26142"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5056a2515a29befdf589a74f13db031ed4142bbabfa8d2c37cafac2de4fccaea", "query": "I started the federation-apiserver and went to and got the following output: Expected: I should have got an empty v1.ServiceList Any idea on what we are doing wrong? 
Have you guys seen this error?\nI think your deep copiers or conversion functions are not getting registered properly. If you look at my change in , some places are inappropriately registering with api.Scheme instead of the passed in scheme. Maybe you have that problem or a similar one, since I think you guys copied some code. , Nikhil Jindal wrote:\nDeep copy generation was slightly broken as well - you may want to double check that they are registered. On May 23, 2016, at 5:50 PM, Daniel Smith wrote: I think your deep copiers or conversion functions are not getting registered properly. If you look at my change in , some places are inappropriately registering with api.Scheme instead of the passed in scheme. Maybe you have that problem or a similar one, since I think you guys copied some code. , Nikhil Jindal wrote: \u2014 You are receiving this because you are on a team that was mentioned. Reply to this email directly or view it on GitHub\nI got the same error with latest branch, as I was swamping on service controller, so I use a previous version and worked around it.\ndeep copies were indeed missing, but adding them hasnt fixed the issue. Will debug more. is my PR to add the deep copies cc Please feel free to file issues for any such problem you face. We need to fix all these issues. Its good if we know about them early. Thanks!\nfixes this", "positive_passages": [{"docid": "doc-en-kubernetes-730e16fcf0872634daf133e83372c62bab09568914217adae36e316b4301ad9a", "text": "for _, kind := range []string{ \"Service\", } { err = api.Scheme.AddFieldLabelConversionFunc(\"v1\", kind, err = scheme.AddFieldLabelConversionFunc(\"v1\", kind, func(label, value string) (string, string, error) { switch label { case \"metadata.namespace\",", "commid": "kubernetes_pr_26142"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5056a2515a29befdf589a74f13db031ed4142bbabfa8d2c37cafac2de4fccaea", "query": "I started the federation-apiserver and went to and got the following output: Expected: I should have got an empty v1.ServiceList Any idea on what we are doing wrong? Have you guys seen this error?\nI think your deep copiers or conversion functions are not getting registered properly. If you look at my change in , some places are inappropriately registering with api.Scheme instead of the passed in scheme. Maybe you have that problem or a similar one, since I think you guys copied some code. , Nikhil Jindal wrote:\nDeep copy generation was slightly broken as well - you may want to double check that they are registered. On May 23, 2016, at 5:50 PM, Daniel Smith wrote: I think your deep copiers or conversion functions are not getting registered properly. If you look at my change in , some places are inappropriately registering with api.Scheme instead of the passed in scheme. Maybe you have that problem or a similar one, since I think you guys copied some code. , Nikhil Jindal wrote: \u2014 You are receiving this because you are on a team that was mentioned. Reply to this email directly or view it on GitHub\nI got the same error with latest branch, as I was swamping on service controller, so I use a previous version and worked around it.\ndeep copies were indeed missing, but adding them hasnt fixed the issue. Will debug more. is my PR to add the deep copies cc Please feel free to file issues for any such problem you face. We need to fix all these issues. Its good if we know about them early. 
Thanks!\nfixes this", "positive_passages": [{"docid": "doc-en-kubernetes-71d428ebfbc7fa4fb0807eb410ca7f2bf521c711473b4462db452802b18c4739", "text": " /* Copyright 2016 The Kubernetes Authors All rights reserved. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package v1 import ( \"k8s.io/kubernetes/pkg/api/v1\" \"k8s.io/kubernetes/pkg/runtime\" ) func addDeepCopyFuncs(scheme *runtime.Scheme) { if err := scheme.AddGeneratedDeepCopyFuncs( v1.DeepCopy_v1_DeleteOptions, v1.DeepCopy_v1_ExportOptions, v1.DeepCopy_v1_List, v1.DeepCopy_v1_ListOptions, v1.DeepCopy_v1_ObjectMeta, v1.DeepCopy_v1_ObjectReference, v1.DeepCopy_v1_OwnerReference, v1.DeepCopy_v1_Service, v1.DeepCopy_v1_ServiceList, v1.DeepCopy_v1_ServicePort, v1.DeepCopy_v1_ServiceSpec, v1.DeepCopy_v1_ServiceStatus, ); err != nil { // if one of the deep copy functions is malformed, detect it immediately. panic(err) } } ", "commid": "kubernetes_pr_26142"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5056a2515a29befdf589a74f13db031ed4142bbabfa8d2c37cafac2de4fccaea", "query": "I started the federation-apiserver and went to and got the following output: Expected: I should have got an empty v1.ServiceList Any idea on what we are doing wrong? Have you guys seen this error?\nI think your deep copiers or conversion functions are not getting registered properly. If you look at my change in , some places are inappropriately registering with api.Scheme instead of the passed in scheme. Maybe you have that problem or a similar one, since I think you guys copied some code. , Nikhil Jindal wrote:\nDeep copy generation was slightly broken as well - you may want to double check that they are registered. On May 23, 2016, at 5:50 PM, Daniel Smith wrote: I think your deep copiers or conversion functions are not getting registered properly. If you look at my change in , some places are inappropriately registering with api.Scheme instead of the passed in scheme. Maybe you have that problem or a similar one, since I think you guys copied some code. , Nikhil Jindal wrote: \u2014 You are receiving this because you are on a team that was mentioned. Reply to this email directly or view it on GitHub\nI got the same error with latest branch, as I was swamping on service controller, so I use a previous version and worked around it.\ndeep copies were indeed missing, but adding them hasnt fixed the issue. Will debug more. is my PR to add the deep copies cc Please feel free to file issues for any such problem you face. We need to fix all these issues. Its good if we know about them early. 
Thanks!\nfixes this", "positive_passages": [{"docid": "doc-en-kubernetes-0c27dc9269463d68be33f02fb16abc2a50d57f28e79ff20b73b070221e6ffb90", "text": "addKnownTypes(scheme) addConversionFuncs(scheme) addDefaultingFuncs(scheme) addDeepCopyFuncs(scheme) } // Adds the list of known types to api.Scheme.", "commid": "kubernetes_pr_26142"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c5413e82f6f4d07df951c5f23d8ac5d09ba356ec19c2b239217d4746388b00db", "query": "https://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nAbhishek ought not be responsible for this any more.\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\n[FLAKE-PING] This flaky-test issue would love to have more attention...\n[FLAKE-PING] This flaky-test issue would love to have more attention...\n[FLAKE-PING] This flaky-test issue would love to have more attention.\n[FLAKE-PING] This flaky-test issue would love to have more attention.\n[FLAKE-PING] This flaky-test issue would love to have more attention.\n[FLAKE-PING] This flaky-test issue would love to have more attention.\n[FLAKE-PING] This flaky-test issue would love to have more attention.\n[FLAKE-PING] This flaky-test issue would love to have more attention.\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nwhat is the status of this issue?\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nNo idea. 
This should have nothing to do with me.\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nAh kk just thought you might know something given the label.\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nI know it didn't get any attention last release :)\nStatus is that I'm trying to fix it... I some code to the test to help debug why the flake was occurring as I wasn't able to repro it on my local cluster.\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite} Happened on a presubmit run in .\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nThis seems to be related to issue", "positive_passages": [{"docid": "doc-en-kubernetes-7cd2e9a903ffb272eb913a3337912ccb3388d38e4f7b89795a67b5b463cf17f1", "text": "func assertManagedStatus( config *KubeletManagedHostConfig, podName string, expectedIsManaged bool, name string) { // See https://github.com/kubernetes/kubernetes/issues/27023 // TODO: workaround for https://github.com/kubernetes/kubernetes/issues/34256 // // Retry until timeout for the right contents of /etc/hosts to show // up. There may be a low probability race here. We still fail the // test if retry was necessary, but at least we will know whether or // not it resolves or seems to be a permanent condition. // // If /etc/hosts is properly mounted, then this will succeed // immediately. // Retry until timeout for the contents of /etc/hosts to show // up. Note: if /etc/hosts is properly mounted, then this will // succeed immediately. 
const retryTimeout = 30 * time.Second retryCount := 0 etcHostsContent := \"\" matched := false for startTime := time.Now(); time.Since(startTime) < retryTimeout; { etcHostsContent = config.getEtcHostsContent(podName, name) isManaged := strings.Contains(etcHostsContent, etcHostsPartialContent) if expectedIsManaged == isManaged { matched = true break return } glog.Errorf( glog.Warningf( \"For pod: %s, name: %s, expected %t, actual %t (/etc/hosts was %q), retryCount: %d\", podName, name, expectedIsManaged, isManaged, etcHostsContent, retryCount)", "commid": "kubernetes_pr_34357"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c5413e82f6f4d07df951c5f23d8ac5d09ba356ec19c2b239217d4746388b00db", "query": "https://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nAbhishek ought not be responsible for this any more.\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\n[FLAKE-PING] This flaky-test issue would love to have more attention...\n[FLAKE-PING] This flaky-test issue would love to have more attention...\n[FLAKE-PING] This flaky-test issue would love to have more attention.\n[FLAKE-PING] This flaky-test issue would love to have more attention.\n[FLAKE-PING] This flaky-test issue would love to have more attention.\n[FLAKE-PING] This flaky-test issue would love to have more attention.\n[FLAKE-PING] This flaky-test issue would love to have more attention.\n[FLAKE-PING] This flaky-test issue would love to have more attention.\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts 
file {Kubernetes e2e suite}\nwhat is the status of this issue?\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nNo idea. This should have nothing to do with me.\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nAh kk just thought you might know something given the label.\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nI know it didn't get any attention last release :)\nStatus is that I'm trying to fix it... I some code to the test to help debug why the flake was occurring as I wasn't able to repro it on my local cluster.\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite} Happened on a presubmit run in .\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nhttps://k8s- Failed: [] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}\nThis seems to be related to issue", "positive_passages": [{"docid": "doc-en-kubernetes-9b19c9bacfa8a6878cf3519a632b8f7464e463be24090eac789eda0a8c7f263b", "text": "time.Sleep(100 * time.Millisecond) } if retryCount > 0 { if matched { conditionText := \"should\" if !expectedIsManaged { conditionText = \"should not\" } framework.Failf( \"/etc/hosts file %s be kubelet managed (name: %s, retries: %d). /etc/hosts contains %q\", conditionText, name, retryCount, etcHostsContent) } else { framework.Failf( \"had to retry %d times to get matching content in /etc/hosts (name: %s)\", retryCount, name) } if expectedIsManaged { framework.Failf( \"/etc/hosts file should be kubelet managed (name: %s, retries: %d). /etc/hosts contains %q\", name, retryCount, etcHostsContent) } else { framework.Failf( \"/etc/hosts file should no be kubelet managed (name: %s, retries: %d). /etc/hosts contains %q\", name, retryCount, etcHostsContent) } }", "commid": "kubernetes_pr_34357"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7679d12cc281cd209833defbf14abe814f6b5c54a99eb0fd89353cf78522b104", "query": "Hello Kubernetes, The order of the columns when you get events confused me. I was thinking, Turns out it's just a visual problem where these events are sorted correctly (by ), but the first column is . I think the first column should be because that's the order the events are listed.\nThis also confused me when I first saw it - I think this ordering would be more intuitive and help users discover the aggregation feature logically, instead of first having to grapple with why events are \"out of order\"\nShould the order be always LASTSEEN,FIRSTSEEN or just when using -w? 
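For the column-ordering question above, here is a small self-contained Go sketch of the underlying idea: if events are sorted by the time they were last seen, printing the last-seen timestamp as the first column keeps the row order and the leading column visually consistent. The event struct and field names are simplified stand-ins, not the real API types.

```go
package main

import (
	"fmt"
	"os"
	"sort"
	"text/tabwriter"
	"time"
)

// event is a simplified stand-in for the API event type; only the fields
// relevant to ordering are kept.
type event struct {
	Name      string
	FirstSeen time.Time
	LastSeen  time.Time
	Count     int
	Message   string
}

func main() {
	now := time.Now()
	events := []event{
		{"pod-a", now.Add(-10 * time.Minute), now.Add(-1 * time.Minute), 5, "pulling image"},
		{"pod-b", now.Add(-3 * time.Minute), now.Add(-3 * time.Minute), 1, "scheduled"},
	}

	// Sort by LastSeen so the row order matches the leading column.
	sort.Slice(events, func(i, j int) bool { return events[i].LastSeen.Before(events[j].LastSeen) })

	w := tabwriter.NewWriter(os.Stdout, 0, 8, 2, ' ', 0)
	fmt.Fprintln(w, "LASTSEEN\tFIRSTSEEN\tCOUNT\tNAME\tMESSAGE")
	for _, e := range events {
		fmt.Fprintf(w, "%s\t%s\t%d\t%s\t%s\n",
			e.LastSeen.Format(time.Kitchen), e.FirstSeen.Format(time.Kitchen),
			e.Count, e.Name, e.Message)
	}
	w.Flush()
}
```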
I have an ugly PR for the latter, but perhaps we should just go for the former.", "positive_passages": [{"docid": "doc-en-kubernetes-df84244d39276a9a203153ee2d6c383335e287c0a4e6d0269e1f13c2d4ed4bf7", "text": "var endpointColumns = []string{\"NAME\", \"ENDPOINTS\", \"AGE\"} var nodeColumns = []string{\"NAME\", \"STATUS\", \"AGE\"} var daemonSetColumns = []string{\"NAME\", \"DESIRED\", \"CURRENT\", \"NODE-SELECTOR\", \"AGE\"} var eventColumns = []string{\"FIRSTSEEN\", \"LASTSEEN\", \"COUNT\", \"NAME\", \"KIND\", \"SUBOBJECT\", \"TYPE\", \"REASON\", \"SOURCE\", \"MESSAGE\"} var eventColumns = []string{\"LASTSEEN\", \"FIRSTSEEN\", \"COUNT\", \"NAME\", \"KIND\", \"SUBOBJECT\", \"TYPE\", \"REASON\", \"SOURCE\", \"MESSAGE\"} var limitRangeColumns = []string{\"NAME\", \"AGE\"} var resourceQuotaColumns = []string{\"NAME\", \"AGE\"} var namespaceColumns = []string{\"NAME\", \"STATUS\", \"AGE\"}", "commid": "kubernetes_pr_27549"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7679d12cc281cd209833defbf14abe814f6b5c54a99eb0fd89353cf78522b104", "query": "Hello Kubernetes, The order of the columns when you get events confused me. I was thinking, Turns out it's just a visual problem where these events are sorted correctly (by ), but the first column is . I think the first column should be because that's the order the events are listed.\nThis also confused me when I first saw it - I think this ordering would be more intuitive and help users discover the aggregation feature logically, instead of first having to grapple with why events are \"out of order\"\nShould the order be always LASTSEEN,FIRSTSEEN or just when using -w? I have an ugly PR for the latter, but perhaps we should just go for the former.", "positive_passages": [{"docid": "doc-en-kubernetes-cad1fa7eff47c69fcfd59f8fc7aa244645ffa6e26f50bc30b33322d5eb0ab108", "text": "if _, err := fmt.Fprintf( w, \"%st%st%dt%st%st%st%st%st%st%s\", FirstTimestamp, LastTimestamp, FirstTimestamp, event.Count, event.InvolvedObject.Name, event.InvolvedObject.Kind,", "commid": "kubernetes_pr_27549"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7f95224274155bb39af2e05742451e79cefc1f176d05622797c3ba56ecf46808", "query": "is failing with: Examples:\nThis is the failing line of code...\nPASS: FAIL: Seems like this disappeared: Also this is flipped: we run the following\nmy leading theory is this is something in\nThe issue is that the above PR changed a bunch of to be and thus we no longer wait for getArchive() to complete before starting tests. And unfortunately the previous PR cannot be reverted\nAfter merging, kubelet-gce-e2e-ci still failed with the same issue. reopen the issue. Currently the failure blocked the merge-bot.\ndid stop the bleeding. Now kubelet-gce-e2e-ci is green: and merge-bot is unblocked.\nI think the \"go arc\" piece was a red herring. I suspect the issue is that the same PR tries to copy the binary into the archive directory, but PR stops building ginkgo, so the copy fails. 
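The root-cause reasoning above (a later copy step failing because an expected binary was never built) is easier to spot when the existence check itself produces a descriptive error, which is essentially what the referenced diff does for kube-apiserver. Below is a minimal, self-contained Go sketch of that check; the directory layout used in main is illustrative only.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// locateBinary returns the path of a binary expected under binDir, or a
// descriptive error if the build step never produced it. Failing here makes
// the root cause obvious instead of surfacing later as a copy failure.
func locateBinary(binDir, name string) (string, error) {
	path := filepath.Join(binDir, name)
	if _, err := os.Stat(path); err != nil {
		return "", fmt.Errorf("could not find %s under directory %s (was it built?): %v", name, binDir, err)
	}
	return path, nil
}

func main() {
	// Illustrative output directory; adjust to the real build output path.
	p, err := locateBinary("_output/local/bin/linux/amd64", "kube-apiserver")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("found:", p)
}
```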
That is why this line then fails...", "positive_passages": [{"docid": "doc-en-kubernetes-73295edc882161b8be65028842cfb0cb0d438b3beb196432dc1c73796c43fca1", "text": "var k8sBinDir = flag.String(\"k8s-bin-dir\", \"\", \"Directory containing k8s kubelet and kube-apiserver binaries.\") var buildTargets = []string{ \"cmd/kubelet\", \"cmd/kube-apiserver\", \"test/e2e_node/e2e_node.test\", } func buildGo() { glog.Infof(\"Building k8s binaries...\") k8sRoot, err := getK8sRootDir() if err != nil { glog.Fatalf(\"Failed to locate kubernetes root directory %v.\", err) } cmd := exec.Command(filepath.Join(k8sRoot, \"hack/build-go.sh\"), buildTargets...) cmd := exec.Command(filepath.Join(k8sRoot, \"hack/build-go.sh\")) cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr err = cmd.Run()", "commid": "kubernetes_pr_27108"}], "negative_passages": []} {"query_id": "q-en-kubernetes-7f95224274155bb39af2e05742451e79cefc1f176d05622797c3ba56ecf46808", "query": "is failing with: Examples:\nThis is the failing line of code...\nPASS: FAIL: Seems like this disappeared: Also this is flipped: we run the following\nmy leading theory is this is something in\nThe issue is that the above PR changed a bunch of to be and thus we no longer wait for getArchive() to complete before starting tests. And unfortunately the previous PR cannot be reverted\nAfter merging, kubelet-gce-e2e-ci still failed with the same issue. reopen the issue. Currently the failure blocked the merge-bot.\ndid stop the bleeding. Now kubelet-gce-e2e-ci is green: and merge-bot is unblocked.\nI think the \"go arc\" piece was a red herring. I suspect the issue is that the same PR tries to copy the binary into the archive directory, but PR stops building ginkgo, so the copy fails. That is why this line then fails...", "positive_passages": [{"docid": "doc-en-kubernetes-92743c6358e0bad6ac9d7d238a92ee22571b88ffaefaedc04c71c8086c6a496e", "text": "return \"\", err } if _, err := os.Stat(filepath.Join(*k8sBinDir, bin)); err != nil { return \"\", fmt.Errorf(\"Could not find %s under directory %s.\", bin, absPath) return \"\", fmt.Errorf(\"Could not find kube-apiserver under directory %s.\", absPath) } return filepath.Join(absPath, bin), nil }", "commid": "kubernetes_pr_27108"}], "negative_passages": []} {"query_id": "q-en-kubernetes-137e73901e94fc53b23ce122acea4f6c63caaffb221f4c8fdcf70adeb68da5f7", "query": "When replacing an existing pod, the command fails as it tries to create a pod with a name that is still used by terminating pod. How does the option works? deletes a pod (e.g. mypod) creates a new pod with the same name (mypod) What happens? As the command tries to recreate the pod, it fails as the is in state. Thus it can not be created. The problem is the does not wait for a pod to be deleted. Reproducer: run mypod --image=yapei/hello-openshift --generator=run-pod/v1 get pod mypod -o yaml replace -f --force The last command ends with:\noption introduce by\nWith set to positive value, what should be the default behaviour of the command? When set to , should the command hang for and then create the pod? Or is there a way to create a callback (inside a scheduler?) that creates the pod once the current one is deleted?\nYou could watch for the resource to be deleted and create after you see the deletion happen\nhave you ever tried to set , I think this will do a non graceful deletion. 
should make sure the old resource is deleted before it creates a new resource, if not, that's a bug.\nThough the help says the is to be used only with , it returns the error above.\nThen I think there are two issues we need to track: 1. option should be ignored or just returns error if users are not using option. 2. should always make sure the old resource has been deleted before create the new resource.", "positive_passages": [{"docid": "doc-en-kubernetes-dab75e57b1154a44c86579560ea00a1304ab777c7f22cf03803fba62c4e37550", "text": "kubectl replace \"${kube_flags[@]}\" --force -f /tmp/tmp-valid-pod.json # Post-condition: spec.container.name = \"replaced-k8s-serve-hostname\" kube::test::get_object_assert 'pod valid-pod' \"{{(index .spec.containers 0).name}}\" 'replaced-k8s-serve-hostname' ## check replace --grace-period requires --force output_message=$(! kubectl replace \"${kube_flags[@]}\" --grace-period=1 -f /tmp/tmp-valid-pod.json 2>&1) kube::test::if_has_string \"${output_message}\" '--grace-period must have --force specified' ## check replace --timeout requires --force output_message=$(! kubectl replace \"${kube_flags[@]}\" --timeout=1s -f /tmp/tmp-valid-pod.json 2>&1) kube::test::if_has_string \"${output_message}\" '--timeout must have --force specified' #cleaning rm /tmp/tmp-valid-pod.json", "commid": "kubernetes_pr_31841"}], "negative_passages": []} {"query_id": "q-en-kubernetes-137e73901e94fc53b23ce122acea4f6c63caaffb221f4c8fdcf70adeb68da5f7", "query": "When replacing an existing pod, the command fails as it tries to create a pod with a name that is still used by terminating pod. How does the option works? deletes a pod (e.g. mypod) creates a new pod with the same name (mypod) What happens? As the command tries to recreate the pod, it fails as the is in state. Thus it can not be created. The problem is the does not wait for a pod to be deleted. Reproducer: run mypod --image=yapei/hello-openshift --generator=run-pod/v1 get pod mypod -o yaml replace -f --force The last command ends with:\noption introduce by\nWith set to positive value, what should be the default behaviour of the command? When set to , should the command hang for and then create the pod? Or is there a way to create a callback (inside a scheduler?) that creates the pod once the current one is deleted?\nYou could watch for the resource to be deleted and create after you see the deletion happen\nhave you ever tried to set , I think this will do a non graceful deletion. should make sure the old resource is deleted before it creates a new resource, if not, that's a bug.\nThough the help says the is to be used only with , it returns the error above.\nThen I think there are two issues we need to track: 1. option should be ignored or just returns error if users are not using option. 2. should always make sure the old resource has been deleted before create the new resource.", "positive_passages": [{"docid": "doc-en-kubernetes-b8ab92bba877a2e277509e29b9f50eba1c508735654361e9f650e8fd7bd542b1", "text": "\"github.com/spf13/cobra\" \"github.com/golang/glog\" \"k8s.io/kubernetes/pkg/api/errors\" \"k8s.io/kubernetes/pkg/kubectl\" cmdutil \"k8s.io/kubernetes/pkg/kubectl/cmd/util\" \"k8s.io/kubernetes/pkg/kubectl/resource\" \"k8s.io/kubernetes/pkg/runtime\" \"k8s.io/kubernetes/pkg/util/wait\" ) // ReplaceOptions is the start of the data required to perform the operation. 
As new fields are added, add them here instead of", "commid": "kubernetes_pr_31841"}], "negative_passages": []} {"query_id": "q-en-kubernetes-137e73901e94fc53b23ce122acea4f6c63caaffb221f4c8fdcf70adeb68da5f7", "query": "When replacing an existing pod, the command fails as it tries to create a pod with a name that is still used by the terminating pod. How does the option work? deletes a pod (e.g. mypod) creates a new pod with the same name (mypod) What happens? As the command tries to recreate the pod, it fails as the is in state. Thus it cannot be created. The problem is the does not wait for a pod to be deleted. Reproducer: run mypod --image=yapei/hello-openshift --generator=run-pod/v1 get pod mypod -o yaml replace -f --force The last command ends with:\noption introduced by\nWith set to a positive value, what should be the default behaviour of the command? When set to , should the command hang for and then create the pod? Or is there a way to create a callback (inside a scheduler?) that creates the pod once the current one is deleted?\nYou could watch for the resource to be deleted and create after you see the deletion happen\nhave you ever tried to set , I think this will do a non-graceful deletion. 
should make sure the old resource is deleted before it creates a new resource, if not, that's a bug.\nThough the help says the is to be used only with , it returns the error above.\nThen I think there are two issues we need to track: 1. option should be ignored or just returns error if users are not using option. 2. should always make sure the old resource has been deleted before create the new resource.", "positive_passages": [{"docid": "doc-en-kubernetes-fc3384fd8a13d7e0f644848a247d424d80fef3ecf578423d688de74fb533c9c0", "text": "} //Replace will create a resource if it doesn't exist already, so ignore not found error ignoreNotFound := true timeout := cmdutil.GetFlagDuration(cmd, \"timeout\") // By default use a reaper to delete all related resources. if cmdutil.GetFlagBool(cmd, \"cascade\") { glog.Warningf(\"\"cascade\" is set, kubectl will delete and re-create all resources managed by this resource (e.g. Pods created by a ReplicationController). Consider using \"kubectl rolling-update\" if you want to update a ReplicationController together with its Pods.\") err = ReapResult(r, f, out, cmdutil.GetFlagBool(cmd, \"cascade\"), ignoreNotFound, cmdutil.GetFlagDuration(cmd, \"timeout\"), cmdutil.GetFlagInt(cmd, \"grace-period\"), shortOutput, mapper, false) err = ReapResult(r, f, out, cmdutil.GetFlagBool(cmd, \"cascade\"), ignoreNotFound, timeout, cmdutil.GetFlagInt(cmd, \"grace-period\"), shortOutput, mapper, false) } else { err = DeleteResult(r, out, ignoreNotFound, shortOutput, mapper) }", "commid": "kubernetes_pr_31841"}], "negative_passages": []} {"query_id": "q-en-kubernetes-137e73901e94fc53b23ce122acea4f6c63caaffb221f4c8fdcf70adeb68da5f7", "query": "When replacing an existing pod, the command fails as it tries to create a pod with a name that is still used by terminating pod. How does the option works? deletes a pod (e.g. mypod) creates a new pod with the same name (mypod) What happens? As the command tries to recreate the pod, it fails as the is in state. Thus it can not be created. The problem is the does not wait for a pod to be deleted. Reproducer: run mypod --image=yapei/hello-openshift --generator=run-pod/v1 get pod mypod -o yaml replace -f --force The last command ends with:\noption introduce by\nWith set to positive value, what should be the default behaviour of the command? When set to , should the command hang for and then create the pod? Or is there a way to create a callback (inside a scheduler?) that creates the pod once the current one is deleted?\nYou could watch for the resource to be deleted and create after you see the deletion happen\nhave you ever tried to set , I think this will do a non graceful deletion. should make sure the old resource is deleted before it creates a new resource, if not, that's a bug.\nThough the help says the is to be used only with , it returns the error above.\nThen I think there are two issues we need to track: 1. option should be ignored or just returns error if users are not using option. 2. 
should always make sure the old resource has been deleted before create the new resource.", "positive_passages": [{"docid": "doc-en-kubernetes-24e12552e7d4bc11a4bae587cf87efb5ebd98d10e90aacd6e973fec59f38aece", "text": "return err } if timeout == 0 { timeout = kubectl.Timeout } r.Visit(func(info *resource.Info, err error) error { if err != nil { return err } return wait.PollImmediate(kubectl.Interval, timeout, func() (bool, error) { if err := info.Get(); !errors.IsNotFound(err) { return false, err } return true, nil }) }) r = resource.NewBuilder(mapper, typer, resource.ClientMapperFunc(f.UnstructuredClientForMapping), runtime.UnstructuredJSONScheme). Schema(schema). ContinueOnError().", "commid": "kubernetes_pr_31841"}], "negative_passages": []} {"query_id": "q-en-kubernetes-137e73901e94fc53b23ce122acea4f6c63caaffb221f4c8fdcf70adeb68da5f7", "query": "When replacing an existing pod, the command fails as it tries to create a pod with a name that is still used by terminating pod. How does the option works? deletes a pod (e.g. mypod) creates a new pod with the same name (mypod) What happens? As the command tries to recreate the pod, it fails as the is in state. Thus it can not be created. The problem is the does not wait for a pod to be deleted. Reproducer: run mypod --image=yapei/hello-openshift --generator=run-pod/v1 get pod mypod -o yaml replace -f --force The last command ends with:\noption introduce by\nWith set to positive value, what should be the default behaviour of the command? When set to , should the command hang for and then create the pod? Or is there a way to create a callback (inside a scheduler?) that creates the pod once the current one is deleted?\nYou could watch for the resource to be deleted and create after you see the deletion happen\nhave you ever tried to set , I think this will do a non graceful deletion. should make sure the old resource is deleted before it creates a new resource, if not, that's a bug.\nThough the help says the is to be used only with , it returns the error above.\nThen I think there are two issues we need to track: 1. option should be ignored or just returns error if users are not using option. 2. 
should always make sure the old resource has been deleted before create the new resource.", "positive_passages": [{"docid": "doc-en-kubernetes-08fd3593414206bf9927f1077b4ff16d2bca233d5963e85ca60d73830883ca48", "text": "f, tf, codec, _ := NewAPIFactory() ns := dynamic.ContentConfig().NegotiatedSerializer tf.Printer = &testPrinter{} deleted := false tf.Client = &fake.RESTClient{ NegotiatedSerializer: ns, Client: fake.CreateHTTPClient(func(req *http.Request) (*http.Response, error) { switch p, m := req.URL.Path, req.Method; { case p == \"/namespaces/test/replicationcontrollers/redis-master\" && (m == http.MethodGet || m == http.MethodPut || m == http.MethodDelete): case p == \"/namespaces/test/replicationcontrollers/redis-master\" && m == http.MethodDelete: deleted = true fallthrough case p == \"/namespaces/test/replicationcontrollers/redis-master\" && m == http.MethodPut: return &http.Response{StatusCode: http.StatusOK, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil case p == \"/namespaces/test/replicationcontrollers/redis-master\" && m == http.MethodGet: statusCode := http.StatusOK if deleted { statusCode = http.StatusNotFound } return &http.Response{StatusCode: statusCode, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil case p == \"/namespaces/test/replicationcontrollers\" && m == http.MethodPost: return &http.Response{StatusCode: http.StatusCreated, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil default:", "commid": "kubernetes_pr_31841"}], "negative_passages": []} {"query_id": "q-en-kubernetes-137e73901e94fc53b23ce122acea4f6c63caaffb221f4c8fdcf70adeb68da5f7", "query": "When replacing an existing pod, the command fails as it tries to create a pod with a name that is still used by terminating pod. How does the option works? deletes a pod (e.g. mypod) creates a new pod with the same name (mypod) What happens? As the command tries to recreate the pod, it fails as the is in state. Thus it can not be created. The problem is the does not wait for a pod to be deleted. Reproducer: run mypod --image=yapei/hello-openshift --generator=run-pod/v1 get pod mypod -o yaml replace -f --force The last command ends with:\noption introduce by\nWith set to positive value, what should be the default behaviour of the command? When set to , should the command hang for and then create the pod? Or is there a way to create a callback (inside a scheduler?) that creates the pod once the current one is deleted?\nYou could watch for the resource to be deleted and create after you see the deletion happen\nhave you ever tried to set , I think this will do a non graceful deletion. should make sure the old resource is deleted before it creates a new resource, if not, that's a bug.\nThough the help says the is to be used only with , it returns the error above.\nThen I think there are two issues we need to track: 1. option should be ignored or just returns error if users are not using option. 2. 
should always make sure the old resource has been deleted before create the new resource.", "positive_passages": [{"docid": "doc-en-kubernetes-9ce6eebd0a6751e9f10424517d6511cd07cf2ee4f8d08dab07f363cf779479f4", "text": "f, tf, codec, _ := NewAPIFactory() ns := dynamic.ContentConfig().NegotiatedSerializer tf.Printer = &testPrinter{} redisMasterDeleted := false frontendDeleted := false tf.Client = &fake.RESTClient{ NegotiatedSerializer: ns, Client: fake.CreateHTTPClient(func(req *http.Request) (*http.Response, error) { switch p, m := req.URL.Path, req.Method; { case p == \"/namespaces/test/replicationcontrollers/redis-master\" && (m == http.MethodGet || m == http.MethodPut || m == http.MethodDelete): case p == \"/namespaces/test/replicationcontrollers/redis-master\" && m == http.MethodDelete: redisMasterDeleted = true fallthrough case p == \"/namespaces/test/replicationcontrollers/redis-master\" && m == http.MethodPut: return &http.Response{StatusCode: http.StatusOK, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil case p == \"/namespaces/test/replicationcontrollers/redis-master\" && m == http.MethodGet: statusCode := http.StatusOK if redisMasterDeleted { statusCode = http.StatusNotFound } return &http.Response{StatusCode: statusCode, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil case p == \"/namespaces/test/replicationcontrollers\" && m == http.MethodPost: return &http.Response{StatusCode: http.StatusCreated, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil case p == \"/namespaces/test/services/frontend\" && (m == http.MethodGet || m == http.MethodPut || m == http.MethodDelete): case p == \"/namespaces/test/services/frontend\" && m == http.MethodDelete: frontendDeleted = true fallthrough case p == \"/namespaces/test/services/frontend\" && m == http.MethodPut: return &http.Response{StatusCode: http.StatusOK, Header: defaultHeader(), Body: objBody(codec, &svc.Items[0])}, nil case p == \"/namespaces/test/services/frontend\" && m == http.MethodGet: statusCode := http.StatusOK if frontendDeleted { statusCode = http.StatusNotFound } return &http.Response{StatusCode: statusCode, Header: defaultHeader(), Body: objBody(codec, &svc.Items[0])}, nil case p == \"/namespaces/test/services\" && m == http.MethodPost: return &http.Response{StatusCode: http.StatusCreated, Header: defaultHeader(), Body: objBody(codec, &svc.Items[0])}, nil default:", "commid": "kubernetes_pr_31841"}], "negative_passages": []} {"query_id": "q-en-kubernetes-137e73901e94fc53b23ce122acea4f6c63caaffb221f4c8fdcf70adeb68da5f7", "query": "When replacing an existing pod, the command fails as it tries to create a pod with a name that is still used by terminating pod. How does the option works? deletes a pod (e.g. mypod) creates a new pod with the same name (mypod) What happens? As the command tries to recreate the pod, it fails as the is in state. Thus it can not be created. The problem is the does not wait for a pod to be deleted. Reproducer: run mypod --image=yapei/hello-openshift --generator=run-pod/v1 get pod mypod -o yaml replace -f --force The last command ends with:\noption introduce by\nWith set to positive value, what should be the default behaviour of the command? When set to , should the command hang for and then create the pod? Or is there a way to create a callback (inside a scheduler?) 
that creates the pod once the current one is deleted?\nYou could watch for the resource to be deleted and create after you see the deletion happen\nhave you ever tried to set , I think this will do a non graceful deletion. should make sure the old resource is deleted before it creates a new resource, if not, that's a bug.\nThough the help says the is to be used only with , it returns the error above.\nThen I think there are two issues we need to track: 1. option should be ignored or just returns error if users are not using option. 2. should always make sure the old resource has been deleted before create the new resource.", "positive_passages": [{"docid": "doc-en-kubernetes-fca4dad94514e5fc4023681bd6f559e7580d9a775d21b7fc0364bdcf0d05bb9b", "text": "f, tf, codec, _ := NewAPIFactory() ns := dynamic.ContentConfig().NegotiatedSerializer tf.Printer = &testPrinter{} created := map[string]bool{} tf.Client = &fake.RESTClient{ NegotiatedSerializer: ns, Client: fake.CreateHTTPClient(func(req *http.Request) (*http.Response, error) { switch p, m := req.URL.Path, req.Method; { case strings.HasPrefix(p, \"/namespaces/test/replicationcontrollers/\") && (m == http.MethodGet || m == http.MethodPut || m == http.MethodDelete): case strings.HasPrefix(p, \"/namespaces/test/replicationcontrollers/\") && m == http.MethodPut: created[p] = true return &http.Response{StatusCode: http.StatusOK, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil case strings.HasPrefix(p, \"/namespaces/test/replicationcontrollers/\") && m == http.MethodGet: statusCode := http.StatusNotFound if created[p] { statusCode = http.StatusOK } return &http.Response{StatusCode: statusCode, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil case strings.HasPrefix(p, \"/namespaces/test/replicationcontrollers/\") && m == http.MethodDelete: delete(created, p) return &http.Response{StatusCode: http.StatusOK, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil case strings.HasPrefix(p, \"/namespaces/test/replicationcontrollers\") && m == http.MethodPost: return &http.Response{StatusCode: http.StatusCreated, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil", "commid": "kubernetes_pr_31841"}], "negative_passages": []} {"query_id": "q-en-kubernetes-137e73901e94fc53b23ce122acea4f6c63caaffb221f4c8fdcf70adeb68da5f7", "query": "When replacing an existing pod, the command fails as it tries to create a pod with a name that is still used by terminating pod. How does the option works? deletes a pod (e.g. mypod) creates a new pod with the same name (mypod) What happens? As the command tries to recreate the pod, it fails as the is in state. Thus it can not be created. The problem is the does not wait for a pod to be deleted. Reproducer: run mypod --image=yapei/hello-openshift --generator=run-pod/v1 get pod mypod -o yaml replace -f --force The last command ends with:\noption introduce by\nWith set to positive value, what should be the default behaviour of the command? When set to , should the command hang for and then create the pod? Or is there a way to create a callback (inside a scheduler?) that creates the pod once the current one is deleted?\nYou could watch for the resource to be deleted and create after you see the deletion happen\nhave you ever tried to set , I think this will do a non graceful deletion. 
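As an illustrative aside, not part of the referenced PR: the diffs in this record make a forced replace wait for the old object to disappear before re-creating it. Below is a minimal, self-contained Go sketch of that "poll until NotFound, then create" idea; the errNotFound sentinel and the get/create callbacks are hypothetical stand-ins for the real apiserver calls.

package main

import (
	"errors"
	"fmt"
	"time"
)

// errNotFound is a hypothetical sentinel standing in for an apiserver 404.
var errNotFound = errors.New("not found")

// replaceForced waits until get() reports NotFound (the old object is gone),
// then calls create(). It polls every interval and gives up after timeout.
func replaceForced(get, create func() error, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := get()
		if errors.Is(err, errNotFound) {
			return create() // old object is gone; safe to re-create the name
		}
		if err != nil {
			return err // unexpected error while checking the old object
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for old object to be deleted")
		}
		time.Sleep(interval)
	}
}

func main() {
	calls := 0
	get := func() error {
		calls++
		if calls > 3 { // pretend the deletion finishes after a few polls
			return errNotFound
		}
		return nil
	}
	create := func() error { fmt.Println("old object gone; re-created"); return nil }
	if err := replaceForced(get, create, 50*time.Millisecond, 2*time.Second); err != nil {
		fmt.Println("error:", err)
	}
}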
should make sure the old resource is deleted before it creates a new resource, if not, that's a bug.\nThough the help says the is to be used only with , it returns the error above.\nThen I think there are two issues we need to track: 1. option should be ignored or just returns error if users are not using option. 2. should always make sure the old resource has been deleted before create the new resource.", "positive_passages": [{"docid": "doc-en-kubernetes-7465e2e443409b35723bbed5e985b218c2aa9b654177a4817dce96b91d9781b9", "text": "NegotiatedSerializer: ns, Client: fake.CreateHTTPClient(func(req *http.Request) (*http.Response, error) { switch p, m := req.URL.Path, req.Method; { case p == \"/namespaces/test/replicationcontrollers/redis-master\" && m == http.MethodDelete: case p == \"/namespaces/test/replicationcontrollers/redis-master\" && (m == http.MethodGet || m == http.MethodDelete): return &http.Response{StatusCode: http.StatusNotFound, Header: defaultHeader(), Body: stringBody(\"\")}, nil case p == \"/namespaces/test/replicationcontrollers\" && m == http.MethodPost: return &http.Response{StatusCode: http.StatusCreated, Header: defaultHeader(), Body: objBody(codec, &rc.Items[0])}, nil", "commid": "kubernetes_pr_31841"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5d760c5ea5685578ebb9830d3d5528d6365ffb387067e4c2605824d06bca490e", "query": "As mentioned in , the federation etcd will lose its state any time the pod dies, unless we modify to add a PVC. We'll also need to create a PV and modify the turnup script to create the backing volume in the cloud provider.\nFYI I believe this is a showstopper.\nOK. implied I could use to get this provisioned on-demand. AFAICT, , and since it makes our life easier I am trying to get this to work in my test cluster, but so far no joy. Any obvious mistake?\nI needed to not specify a .\nwhew! , Matt Liggett wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-87483e40644167a1658ef107f45d7914b2ad8c0af8314d640ad4742b4fb7c835", "text": "$host_kubectl create secret generic ${name} --from-file=\"${dir}/kubeconfig\" --namespace=\"${FEDERATION_NAMESPACE}\" done $template \"${manifests_root}/federation-apiserver-\"{deployment,secrets}\".yaml\" | $host_kubectl create -f - $template \"${manifests_root}/federation-controller-manager-deployment.yaml\" | $host_kubectl create -f - for file in federation-etcd-pvc.yaml federation-apiserver-{deployment,secrets}.yaml federation-controller-manager-deployment.yaml; do $template \"${manifests_root}/${file}\" | $host_kubectl create -f - done # Update the users kubeconfig to include federation-apiserver credentials. CONTEXT=federation-cluster ", "commid": "kubernetes_pr_28261"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5d760c5ea5685578ebb9830d3d5528d6365ffb387067e4c2605824d06bca490e", "query": "As mentioned in , the federation etcd will lose its state any time the pod dies, unless we modify to add a PVC. We'll also need to create a PV and modify the turnup script to create the backing volume in the cloud provider.\nFYI I believe this is a showstopper.\nOK. implied I could use to get this provisioned on-demand. AFAICT, , and since it makes our life easier I am trying to get this to work in my test cluster, but so far no joy. Any obvious mistake?\nI needed to not specify a .\nwhew! 
, Matt Liggett wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-b82dd4044c6d2fe920fd4e2df0d5b7dd75d385481a6dc0a7f042aadfc815ab17", "text": "readOnly: true - name: etcd image: quay.io/coreos/etcd:v2.3.3 command: - /etcd - --data-dir - /var/etcd/data volumeMounts: - mountPath: /var/etcd name: varetcd volumes: - name: federation-apiserver-secrets secret: secretName: federation-apiserver-secrets - name: varetcd persistentVolumeClaim: claimName: {{.FEDERATION_APISERVER_DEPLOYMENT_NAME}}-etcd-claim ", "commid": "kubernetes_pr_28261"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5d760c5ea5685578ebb9830d3d5528d6365ffb387067e4c2605824d06bca490e", "query": "As mentioned in , the federation etcd will lose its state any time the pod dies, unless we modify to add a PVC. We'll also need to create a PV and modify the turnup script to create the backing volume in the cloud provider.\nFYI I believe this is a showstopper.\nOK. implied I could use to get this provisioned on-demand. AFAICT, , and since it makes our life easier I am trying to get this to work in my test cluster, but so far no joy. Any obvious mistake?\nI needed to not specify a .\nwhew! , Matt Liggett wrote:", "positive_passages": [{"docid": "doc-en-kubernetes-91dea5dc515e3fa361024740b93baf9c4f3ad1480272f6c7073bca63016f08f0", "text": " apiVersion: v1 kind: PersistentVolumeClaim metadata: name: {{.FEDERATION_APISERVER_DEPLOYMENT_NAME}}-etcd-claim annotations: volume.alpha.kubernetes.io/storage-class: \"yes\" namespace: {{.FEDERATION_NAMESPACE}} labels: app: federated-cluster spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi ", "commid": "kubernetes_pr_28261"}], "negative_passages": []} {"query_id": "q-en-kubernetes-54ee9a53c145bcb7f1ea4e17c59ef3f3aaba49d2a04dbd5ca8ea969af1a94c94", "query": "https://k8s- Failed: [] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should not be able to pull from private registry without secret {E2eNode Suite}\nSame with , the fix is under review\nhttps://k8s- Failed: [] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should not be able to pull from private registry without secret {E2eNode Suite}\nhttps://k8s- Failed: [] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should not be able to pull from private registry without secret {E2eNode Suite}\nThe fix has been merged , close this one. Feel free to reopen if this happens again.", "positive_passages": [{"docid": "doc-en-kubernetes-e38dd6a6e96e736291bda9d7780e969a109cfe99d69c22cccfe74c96318796e7", "text": "Expect(container.Create()).To(Succeed()) defer container.Delete() By(\"check the pod phase\") Eventually(container.GetPhase, retryTimeout, pollInterval).Should(Equal(testCase.phase)) Consistently(container.GetPhase, consistentCheckTimeout, pollInterval).Should(Equal(testCase.phase)) // We need to check container state first. The default pod status is pending, If we check // pod phase first, and the expected pod phase is Pending, the container status may not // even show up when we check it. 
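The test change just above reorders the checks and leans on an "eventually reach the state, then make sure it holds" pattern to tolerate the pending-to-running transition. As a hedged, framework-free sketch of that same idea (the probe function below is a hypothetical stand-in for the real GetStatus/GetPhase calls, not code from the PR):

package main

import (
	"fmt"
	"time"
)

// waitThenHold first polls probe() until it returns want (the "Eventually"
// half), then keeps polling for settle to make sure the value does not
// regress (the "Consistently" half).
func waitThenHold(probe func() (string, error), want string, timeout, settle, every time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if got, err := probe(); err == nil && got == want {
			break
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("never reached state %q within %v", want, timeout)
		}
		time.Sleep(every)
	}
	until := time.Now().Add(settle)
	for time.Now().Before(until) {
		if got, err := probe(); err != nil || got != want {
			return fmt.Errorf("state did not hold at %q (got %q, err %v)", want, got, err)
		}
		time.Sleep(every)
	}
	return nil
}

func main() {
	polls := 0
	probe := func() (string, error) {
		polls++
		if polls < 3 {
			return "Pending", nil // simulate the initial pending window
		}
		return "Running", nil
	}
	fmt.Println(waitThenHold(probe, "Running", 2*time.Second, 200*time.Millisecond, 20*time.Millisecond))
}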
By(\"check the container state\") status, err := container.GetStatus() Expect(err).NotTo(HaveOccurred()) Expect(GetContainerState(status.State)).To(Equal(testCase.state)) getState := func() (ContainerState, error) { status, err := container.GetStatus() if err != nil { return ContainerStateUnknown, err } return GetContainerState(status.State), nil } Eventually(getState, retryTimeout, pollInterval).Should(Equal(testCase.state)) Consistently(getState, consistentCheckTimeout, pollInterval).Should(Equal(testCase.state)) By(\"check the pod phase\") Expect(container.GetPhase()).To(Equal(testCase.phase)) By(\"it should be possible to delete\") Expect(container.Delete()).To(Succeed())", "commid": "kubernetes_pr_28323"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3c64df687e12e254812ed26f14838f867cca01f1b7d5e271e2d52376c8409ae4", "query": "Either include non-arch specific stuff in every tar or break that out too. ref cc\ninterested in helping with this one?\nYes, I may help\nOK, I did not actually understand what was in , apparently. Alternate proposal: [ ] update all references that need just server binaries to download the server tarball directly [ ] same for places that just need (related to ) [ ] remove all arch-specific binary artifacts from and replace with a script which downloads the client/server tarballs as necessary. The alternative option, including all of the non-arch-specific code and docs in an arch-specific starts to get very tricky, since we run into a large cross product of client x server platforms/archs.", "positive_passages": [{"docid": "doc-en-kubernetes-675824e77431d37f2d4aaf04c6b142fd6ed6fa2751ac6658ff0ee368d3b24adc", "text": "kube::release::package_kube_manifests_tarball & kube::util::wait-for-jobs || { kube::log::error \"previous tarball phase failed\"; return 1; } kube::release::package_full_tarball & # _full depends on all the previous phases kube::release::package_final_tarball & # _final depends on some of the previous phases kube::release::package_test_tarball & # _test doesn't depend on anything kube::util::wait-for-jobs || { kube::log::error \"previous tarball phase failed\"; return 1; } }", "commid": "kubernetes_pr_35737"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3c64df687e12e254812ed26f14838f867cca01f1b7d5e271e2d52376c8409ae4", "query": "Either include non-arch specific stuff in every tar or break that out too. ref cc\ninterested in helping with this one?\nYes, I may help\nOK, I did not actually understand what was in , apparently. Alternate proposal: [ ] update all references that need just server binaries to download the server tarball directly [ ] same for places that just need (related to ) [ ] remove all arch-specific binary artifacts from and replace with a script which downloads the client/server tarballs as necessary. The alternative option, including all of the non-arch-specific code and docs in an arch-specific starts to get very tricky, since we run into a large cross product of client x server platforms/archs.", "positive_passages": [{"docid": "doc-en-kubernetes-1274bdfc7e0b08f1ac0effa73c36d9c3e4c0d18adc2061de0a80013af14e606a", "text": "kube::release::create_tarball \"${package_name}\" \"${release_stage}/..\" } # This is all the stuff you need to run/install kubernetes. This includes: # - precompiled binaries for client # This is all the platform-independent stuff you need to run/install kubernetes. 
# Arch-specific binaries will need to be downloaded separately (possibly by # using the bundled cluster/get-kube-binaries.sh script). # Included in this tarball: # - Cluster spin up/down scripts and configs for various cloud providers # - tarballs for server binary and salt configs that are ready to be uploaded # - Tarballs for salt configs that are ready to be uploaded # to master by whatever means appropriate. function kube::release::package_full_tarball() { kube::log::status \"Building tarball: full\" # - Examples (which may or may not still work) # - The remnants of the docs/ directory function kube::release::package_final_tarball() { kube::log::status \"Building tarball: final\" # This isn't a \"full\" tarball anymore, but the release lib still expects # artifacts under \"full/kubernetes/\" local release_stage=\"${RELEASE_STAGE}/full/kubernetes\" rm -rf \"${release_stage}\" mkdir -p \"${release_stage}\" # Copy all of the client binaries in here, but not test or server binaries. # The server binaries are included with the server binary tarball. local platform for platform in \"${KUBE_CLIENT_PLATFORMS[@]}\"; do local client_bins=(\"${KUBE_CLIENT_BINARIES[@]}\") if [[ \"${platform%/*}\" == \"windows\" ]]; then client_bins=(\"${KUBE_CLIENT_BINARIES_WIN[@]}\") fi mkdir -p \"${release_stage}/platforms/${platform}\" cp \"${client_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}\" \"${release_stage}/platforms/${platform}\" done mkdir -p \"${release_stage}/client\" cat < \"${release_stage}/client/README\" Client binaries are no longer included in the Kubernetes final tarball. Run cluster/get-kube-binaries.sh to download client and server binaries. EOF # We want everything in /cluster except saltbase. That is only needed on the # server.", "commid": "kubernetes_pr_35737"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3c64df687e12e254812ed26f14838f867cca01f1b7d5e271e2d52376c8409ae4", "query": "Either include non-arch specific stuff in every tar or break that out too. ref cc\ninterested in helping with this one?\nYes, I may help\nOK, I did not actually understand what was in , apparently. Alternate proposal: [ ] update all references that need just server binaries to download the server tarball directly [ ] same for places that just need (related to ) [ ] remove all arch-specific binary artifacts from and replace with a script which downloads the client/server tarballs as necessary. The alternative option, including all of the non-arch-specific code and docs in an arch-specific starts to get very tricky, since we run into a large cross product of client x server platforms/archs.", "positive_passages": [{"docid": "doc-en-kubernetes-3170fb7f02dc7755b373285863ac00ddaa47c08487393e642e3a3dca95eb7593", "text": "mkdir -p \"${release_stage}/server\" cp \"${RELEASE_DIR}/kubernetes-salt.tar.gz\" \"${release_stage}/server/\" cp \"${RELEASE_DIR}\"/kubernetes-server-*.tar.gz \"${release_stage}/server/\" cp \"${RELEASE_DIR}/kubernetes-manifests.tar.gz\" \"${release_stage}/server/\" cat < \"${release_stage}/server/README\" Server binary tarballs are no longer included in the Kubernetes final tarball. Run cluster/get-kube-binaries.sh to download client and server binaries. 
EOF mkdir -p \"${release_stage}/third_party\" cp -R \"${KUBE_ROOT}/third_party/htpasswd\" \"${release_stage}/third_party/htpasswd\"", "commid": "kubernetes_pr_35737"}], "negative_passages": []} {"query_id": "q-en-kubernetes-749738408e786a27678c468a5520c46b775b406c0396b4fc77d0584fc060b72d", "query": "The code in doesn't work for me: does not pass on SIGTERM when the caller is in the same process group. Then the knocks out the process without giving it a chance to kill its child. The bit doesn't help: either the process has already been killed as above, or it gets killed now with SIGKILL; either way the process is still running. Oddly, it does work about one time in ten. I have no explanation for that. I am running from the command-line like this: , on 0-41-generic -Ubuntu SMP Fri Jun 24 11:28:43 UTC 2016 x8664 x8664 x86_64 GNU/Linux CC\nThe manual says \"This is the signal that the calling process will get when its parent dies.\"; it is applied to the process, so that will get killed if the process dies. That isn't what we are looking for:", "positive_passages": [{"docid": "doc-en-kubernetes-192782e70865eb918e6c7a61365c424de8a811f00bc804547686eb681701b61b", "text": "\"os/exec\" \"path\" \"path/filepath\" \"reflect\" \"strconv\" \"strings\" \"syscall\"", "commid": "kubernetes_pr_29380"}], "negative_passages": []} {"query_id": "q-en-kubernetes-749738408e786a27678c468a5520c46b775b406c0396b4fc77d0584fc060b72d", "query": "The code in doesn't work for me: does not pass on SIGTERM when the caller is in the same process group. Then the knocks out the process without giving it a chance to kill its child. The bit doesn't help: either the process has already been killed as above, or it gets killed now with SIGKILL; either way the process is still running. Oddly, it does work about one time in ten. I have no explanation for that. I am running from the command-line like this: , on 0-41-generic -Ubuntu SMP Fri Jun 24 11:28:43 UTC 2016 x8664 x8664 x86_64 GNU/Linux CC\nThe manual says \"This is the signal that the calling process will get when its parent dies.\"; it is applied to the process, so that will get killed if the process dies. That isn't what we are looking for:", "positive_passages": [{"docid": "doc-en-kubernetes-e29eb9f5b45e979e922a5ceacfcaae023c569a02eadb22832c4bb9888167d1c6", "text": "cmd.Cmd.Stdout = outfile cmd.Cmd.Stderr = outfile // Killing the sudo command should kill the server as well. attrs := &syscall.SysProcAttr{} // Hack to set linux-only field without build tags. deathSigField := reflect.ValueOf(attrs).Elem().FieldByName(\"Pdeathsig\") if deathSigField.IsValid() { deathSigField.Set(reflect.ValueOf(syscall.SIGKILL)) } else { cmdErrorChan <- fmt.Errorf(\"Failed to set Pdeathsig field (non-linux build)\") return } cmd.Cmd.SysProcAttr = attrs // Run the command err = cmd.Run() if err != nil {", "commid": "kubernetes_pr_29380"}], "negative_passages": []} {"query_id": "q-en-kubernetes-749738408e786a27678c468a5520c46b775b406c0396b4fc77d0584fc060b72d", "query": "The code in doesn't work for me: does not pass on SIGTERM when the caller is in the same process group. Then the knocks out the process without giving it a chance to kill its child. The bit doesn't help: either the process has already been killed as above, or it gets killed now with SIGKILL; either way the process is still running. Oddly, it does work about one time in ten. I have no explanation for that. 
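For readers unfamiliar with the Pdeathsig mechanism being discussed here, a minimal Linux-only sketch of setting it directly on a child process is shown below (a build tag stands in for the reflect workaround used in the diffs, and the sleep command is only a placeholder). As the issue describes, with sudo in between, the death signal reaches sudo rather than the server it launched.

//go:build linux

package main

import (
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	// Placeholder child process; any long-running command works here.
	cmd := exec.Command("sleep", "600")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// Deliver SIGTERM to the child if this (parent) process dies.
		Pdeathsig: syscall.SIGTERM,
	}
	if err := cmd.Start(); err != nil {
		fmt.Println("start failed:", err)
		return
	}
	fmt.Println("child pid:", cmd.Process.Pid)
	_ = cmd.Wait()
}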
I am running from the command-line like this: , on 0-41-generic -Ubuntu SMP Fri Jun 24 11:28:43 UTC 2016 x8664 x8664 x86_64 GNU/Linux CC\nThe manual says \"This is the signal that the calling process will get when its parent dies.\"; it is applied to the process, so that will get killed if the process dies. That isn't what we are looking for:", "positive_passages": [{"docid": "doc-en-kubernetes-df1a74591f1bae243d9b31a68c5d4dc241b531e54cf4b2a56410b3237791fe1c", "text": "const timeout = 10 * time.Second for _, signal := range []string{\"-TERM\", \"-KILL\"} { glog.V(2).Infof(\"Killing process %d (%s) with %s\", pid, name, signal) _, err := exec.Command(\"sudo\", \"kill\", signal, strconv.Itoa(pid)).Output() cmd := exec.Command(\"sudo\", \"kill\", signal, strconv.Itoa(pid)) // Run the 'kill' command in a separate process group so sudo doesn't ignore it cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true} _, err := cmd.Output() if err != nil { glog.Errorf(\"Error signaling process %d (%s) with %s: %v\", pid, name, signal, err) continue", "commid": "kubernetes_pr_29380"}], "negative_passages": []} {"query_id": "q-en-kubernetes-749738408e786a27678c468a5520c46b775b406c0396b4fc77d0584fc060b72d", "query": "The code in doesn't work for me: does not pass on SIGTERM when the caller is in the same process group. Then the knocks out the process without giving it a chance to kill its child. The bit doesn't help: either the process has already been killed as above, or it gets killed now with SIGKILL; either way the process is still running. Oddly, it does work about one time in ten. I have no explanation for that. I am running from the command-line like this: , on 0-41-generic -Ubuntu SMP Fri Jun 24 11:28:43 UTC 2016 x8664 x8664 x86_64 GNU/Linux CC\nThe manual says \"This is the signal that the calling process will get when its parent dies.\"; it is applied to the process, so that will get killed if the process dies. That isn't what we are looking for:", "positive_passages": [{"docid": "doc-en-kubernetes-2c34673f1128ad4c99654a24149be721ebdb0a8507b6771ad2341f1f5725ff8a", "text": "cmd.Cmd.Stdout = outfile cmd.Cmd.Stderr = outfile // Killing the sudo command should kill the server as well. // Death of this test process should kill the server as well. attrs := &syscall.SysProcAttr{} // Hack to set linux-only field without build tags. deathSigField := reflect.ValueOf(attrs).Elem().FieldByName(\"Pdeathsig\") if deathSigField.IsValid() { deathSigField.Set(reflect.ValueOf(syscall.SIGKILL)) deathSigField.Set(reflect.ValueOf(syscall.SIGTERM)) } else { cmdErrorChan <- fmt.Errorf(\"Failed to set Pdeathsig field (non-linux build)\") return", "commid": "kubernetes_pr_29685"}], "negative_passages": []} {"query_id": "q-en-kubernetes-749738408e786a27678c468a5520c46b775b406c0396b4fc77d0584fc060b72d", "query": "The code in doesn't work for me: does not pass on SIGTERM when the caller is in the same process group. Then the knocks out the process without giving it a chance to kill its child. The bit doesn't help: either the process has already been killed as above, or it gets killed now with SIGKILL; either way the process is still running. Oddly, it does work about one time in ten. I have no explanation for that. I am running from the command-line like this: , on 0-41-generic -Ubuntu SMP Fri Jun 24 11:28:43 UTC 2016 x8664 x8664 x86_64 GNU/Linux CC\nThe manual says \"This is the signal that the calling process will get when its parent dies.\"; it is applied to the process, so that will get killed if the process dies. 
That isn't what we are looking for:", "positive_passages": [{"docid": "doc-en-kubernetes-d48bc27e64c54a9f5e850899b2021dc0e74990571c18f5b2a72e072eb3b5c29d", "text": "const timeout = 10 * time.Second for _, signal := range []string{\"-TERM\", \"-KILL\"} { glog.V(2).Infof(\"Killing process %d (%s) with %s\", pid, name, signal) _, err := exec.Command(\"sudo\", \"kill\", signal, strconv.Itoa(pid)).Output() cmd := exec.Command(\"sudo\", \"kill\", signal, strconv.Itoa(pid)) // Run the 'kill' command in a separate process group so sudo doesn't ignore it attrs := &syscall.SysProcAttr{} // Hack to set unix-only field without build tags. setpgidField := reflect.ValueOf(attrs).Elem().FieldByName(\"Setpgid\") if setpgidField.IsValid() { setpgidField.Set(reflect.ValueOf(true)) } else { return fmt.Errorf(\"Failed to set Setpgid field (non-unix build)\") } cmd.SysProcAttr = attrs _, err := cmd.Output() if err != nil { glog.Errorf(\"Error signaling process %d (%s) with %s: %v\", pid, name, signal, err) continue", "commid": "kubernetes_pr_29685"}], "negative_passages": []} {"query_id": "q-en-kubernetes-df06935dc12cccd535d95a7add46acf8ed7f7b060e66bb7b117b3eccd06850f8", "query": "https://k8s- Failed: [] [Feature:Example] [] Storm should create and stop Zookeeper, Nimbus and Storm worker servers {Kubernetes e2e suite}\nhttps://k8s- Failed: [] [Feature:Example] [] Storm should create and stop Zookeeper, Nimbus and Storm worker servers {Kubernetes e2e suite}\nhttps://k8s- Failed: [] [Feature:Example] [] Storm should create and stop Zookeeper, Nimbus and Storm worker servers {Kubernetes e2e suite}\nhttps://k8s- Failed: [] [Feature:Example] [] Storm should create and stop Zookeeper, Nimbus and Storm worker servers {Kubernetes e2e suite}\nhttps://k8s- Failed: [] [Feature:Example] [] Storm should create and stop Zookeeper, Nimbus and Storm worker servers {Kubernetes e2e suite}\n[FLAKE-PING] This flaky-test issue would love to have more attention...\nThis has been marked with the \"needs attention\" label and needs to be addressed for the 1.4 release. Could you please close if it's fixed update on your progress to fix it make a case for it not to block the release or find a new owner who can do one of the above soon. Once you or the new owner update this issue and are actively working on it, you can remove the \"needs attention\" label.\nthe following message looks suspicious, looks a DNS issue?\nActually more like a network connectivity issue\ndo you happen to have the Dockerfile for these images? We better push them into\nI don't think this is a blocking issue. It looks a transient issue. Since the example is largely dependent on whether the image can be fetched and how long it takes to pull the image, it is very environment sensitive. I would recommend removing label. 
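Looking back at the process-signalling diffs a little earlier in this section: one change runs the kill helper in its own process group so that sudo does not ignore the signal (that rationale comes from the diff's own comment). A hedged, stand-alone Go sketch of that Setpgid idea follows; the pid and signal in main are placeholders, not values from the tests.

//go:build linux

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"syscall"
)

// signalViaSudo sends the given signal (e.g. "-TERM") to pid with sudo kill.
// The kill command runs in a separate process group so sudo doesn't ignore
// the signal, mirroring the comment in the diff above.
func signalViaSudo(pid int, signal string) error {
	cmd := exec.Command("sudo", "kill", signal, strconv.Itoa(pid))
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kill %s %d failed: %v (%s)", signal, pid, err, out)
	}
	return nil
}

func main() {
	// Placeholder pid; in the tests above this would be the server's pid.
	if err := signalViaSudo(12345, "-TERM"); err != nil {
		fmt.Println(err)
	}
}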
Longer term, we should move the images used in examples to gcr.\nhttps://k8s- Failed: [] [Feature:Example] [] Storm should create and stop Zookeeper, Nimbus and Storm worker servers {Kubernetes e2e suite}\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nfor the last flake, based on , image pull (mattf/zookeeper:latest) started at 42:40, finished at 48:31, about 5min51sec, well passed 5 min\nstorm-nimbus, storm-worker and zookeeper are from they're all automated builds from github (quick link: and )\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nhttps://k8s- Failed: [] [Feature:Example] [] Storm should create and stop Zookeeper, Nimbus and Storm worker servers {Kubernetes e2e suite}", "positive_passages": [{"docid": "doc-en-kubernetes-391fb937369b2ba03bdf1df5a52593de1b3863c8eb27d79ca1528d7547f084a1", "text": "workerControllerJson := mkpath(\"storm-worker-controller.json\") nsFlag := fmt.Sprintf(\"--namespace=%v\", ns) zookeeperPod := \"zookeeper\" nimbusPod := \"nimbus\" By(\"starting Zookeeper\") framework.RunKubectlOrDie(\"create\", \"-f\", zookeeperPodJson, nsFlag) framework.RunKubectlOrDie(\"create\", \"-f\", zookeeperServiceJson, nsFlag) err := framework.WaitForPodNameRunningInNamespace(c, zookeeperPod, ns) err := f.WaitForPodRunningSlow(zookeeperPod) Expect(err).NotTo(HaveOccurred()) By(\"checking if zookeeper is up and running\")", "commid": "kubernetes_pr_32135"}], "negative_passages": []} {"query_id": "q-en-kubernetes-df06935dc12cccd535d95a7add46acf8ed7f7b060e66bb7b117b3eccd06850f8", "query": "https://k8s- Failed: [] [Feature:Example] [] Storm should create and stop Zookeeper, Nimbus and Storm worker servers {Kubernetes e2e suite}\nhttps://k8s- Failed: [] [Feature:Example] [] Storm should create and stop Zookeeper, Nimbus and Storm worker servers {Kubernetes e2e suite}\nhttps://k8s- Failed: [] [Feature:Example] [] Storm should create and stop Zookeeper, Nimbus and Storm worker servers {Kubernetes e2e suite}\nhttps://k8s- Failed: [] [Feature:Example] [] Storm should create and stop Zookeeper, Nimbus and Storm worker servers {Kubernetes e2e suite}\nhttps://k8s- Failed: [] [Feature:Example] [] Storm should create and stop Zookeeper, Nimbus and Storm worker servers {Kubernetes e2e suite}\n[FLAKE-PING] This flaky-test issue would love to have more attention...\nThis has been marked with the \"needs attention\" label and needs to be addressed for the 1.4 release. Could you please close if it's fixed update on your progress to fix it make a case for it not to block the release or find a new owner who can do one of the above soon. Once you or the new owner update this issue and are actively working on it, you can remove the \"needs attention\" label.\nthe following message looks suspicious, looks a DNS issue?\nActually more like a network connectivity issue\ndo you happen to have the Dockerfile for these images? We better push them into\nI don't think this is a blocking issue. It looks a transient issue. Since the example is largely dependent on whether the image can be fetched and how long it takes to pull the image, it is very environment sensitive. I would recommend removing label. 
Longer term, we should move the images used in examples to gcr.\nhttps://k8s- Failed: [] [Feature:Example] [] Storm should create and stop Zookeeper, Nimbus and Storm worker servers {Kubernetes e2e suite}\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nfor the last flake, based on , image pull (mattf/zookeeper:latest) started at 42:40, finished at 48:31, about 5min51sec, well passed 5 min\nstorm-nimbus, storm-worker and zookeeper are from they're all automated builds from github (quick link: and )\n[FLAKE-PING] This flaky-test issue would love to have more attention.\nhttps://k8s- Failed: [] [Feature:Example] [] Storm should create and stop Zookeeper, Nimbus and Storm worker servers {Kubernetes e2e suite}", "positive_passages": [{"docid": "doc-en-kubernetes-870f2abc301ab35fecb026005d1e81e3dd7e93b5f8320215cb2b83d0f9d351c1", "text": "By(\"starting Nimbus\") framework.RunKubectlOrDie(\"create\", \"-f\", nimbusPodJson, nsFlag) framework.RunKubectlOrDie(\"create\", \"-f\", nimbusServiceJson, nsFlag) err = framework.WaitForPodNameRunningInNamespace(c, \"nimbus\", ns) err = f.WaitForPodRunningSlow(nimbusPod) Expect(err).NotTo(HaveOccurred()) err = framework.WaitForEndpoint(c, ns, \"nimbus\")", "commid": "kubernetes_pr_32135"}], "negative_passages": []} {"query_id": "q-en-kubernetes-bfcd8c1616e8828754fd2982c8dfa9b62a9f64f48b1a3ca5310d1a2eb1f5c300", "query": "I think got confused between e.g. KUBEMASTEROSDISTRIBUTION and MASTEROS_DISTRIBUTION; it definitely doesn't match GCE. PR coming in once I've tested.\ncc", "positive_passages": [{"docid": "doc-en-kubernetes-7feae0898f875f506bbb7d894001c3c90dbca14aa42b5df29ea0ba1452f9e2d3", "text": "# OS options for minions KUBE_OS_DISTRIBUTION=\"${KUBE_OS_DISTRIBUTION:-jessie}\" KUBE_MASTER_OS_DISTRIBUTION=\"${KUBE_OS_DISTRIBUTION}\" KUBE_NODE_OS_DISTRIBUTION=\"${KUBE_OS_DISTRIBUTION}\" MASTER_OS_DISTRIBUTION=\"${KUBE_OS_DISTRIBUTION}\" NODE_OS_DISTRIBUTION=\"${KUBE_OS_DISTRIBUTION}\" KUBE_NODE_IMAGE=\"${KUBE_NODE_IMAGE:-}\" COREOS_CHANNEL=\"${COREOS_CHANNEL:-alpha}\" CONTAINER_RUNTIME=\"${KUBE_CONTAINER_RUNTIME:-docker}\"", "commid": "kubernetes_pr_29427"}], "negative_passages": []} {"query_id": "q-en-kubernetes-a540a96ec019e4374a1aed3eeb2f1a3870891c79dbab54fc2c3626988ef08e83", "query": "Description of problem Create a petset, when pod is evicted, the petset can't create a new pod on other node and make sure pod is running. Kubernetes version (use ): v1.4.0-alpha.2.1427+ How to reproduce it (as minimally and precisely as possible): kubele eviction-hard=\"memory.available<${value}\" a peset and wait all pod is running node MemoryPressure=true all pod status What happened: is evicted and petset can't create new pod and make sure pod is running. What you expected to happen: create new pod on other node and make sure pod is running.\nAlso when node come back with sufficient memory, petset can't make pod running again.\nFYI\ncc\nWas able to reproduce it. Pet set controller just loops with the following logs I0830 12:32:52. 9 ] Syncing PetSet default/cockroachdb with 3 pets I0830 12:32:52. 9 ] PetSet cockroachdb blocked from scaling on pet cockroachdb-0 I0830 12:32:52. 9 ] PetSet cockroachdb waiting on unhealthy pet cockroachdb-0 I0830 12:32:52. 9 ] PetSet cockroachdb waiting on unhealthy pet cockroachdb-1 I0830 12:32:52. 9 ] PetSet cockroachdb waiting on unhealthy pet cockroachdb-2\n-- do you have the petset YAML that we can evaluate? What is the quality of service tier for the pods produced by the petset? 
Can you include the of an evicted petset pod so we can see the final resource requirements spec?\nI am able to reproduce it with nginx pet set in e2e test. Here is yaml output of it Also it seems that i have working fix and almost finished debugging e2e test (patchset is linked above). Will be grateful for review.\nAh, this helps clear things up... the linked pr shows the problem in petset controller and not kubelet...\nDoes it make sense to set milestone 1.4 for this issue? I think the fix is ready for review", "positive_passages": [{"docid": "doc-en-kubernetes-50a4e1d31fc54a627222a6d12c44ae5160b4d1683babc7bc04e43fb6be5aa895", "text": "if err := p.SyncPVCs(pet); err != nil { return err } if exists { // if pet failed - we need to remove old one because of consistent naming if exists && realPet.pod.Status.Phase == api.PodFailed { glog.V(4).Infof(\"Delete evicted pod %v\", realPet.pod.Name) if err := p.petClient.Delete(realPet); err != nil { return err } } else if exists { if !p.isHealthy(realPet.pod) { glog.Infof(\"PetSet %v waiting on unhealthy pet %v\", pet.parent.Name, realPet.pod.Name) }", "commid": "kubernetes_pr_31777"}], "negative_passages": []} {"query_id": "q-en-kubernetes-a540a96ec019e4374a1aed3eeb2f1a3870891c79dbab54fc2c3626988ef08e83", "query": "Description of problem Create a petset, when pod is evicted, the petset can't create a new pod on other node and make sure pod is running. Kubernetes version (use ): v1.4.0-alpha.2.1427+ How to reproduce it (as minimally and precisely as possible): kubele eviction-hard=\"memory.available<${value}\" a peset and wait all pod is running node MemoryPressure=true all pod status What happened: is evicted and petset can't create new pod and make sure pod is running. What you expected to happen: create new pod on other node and make sure pod is running.\nAlso when node come back with sufficient memory, petset can't make pod running again.\nFYI\ncc\nWas able to reproduce it. Pet set controller just loops with the following logs I0830 12:32:52. 9 ] Syncing PetSet default/cockroachdb with 3 pets I0830 12:32:52. 9 ] PetSet cockroachdb blocked from scaling on pet cockroachdb-0 I0830 12:32:52. 9 ] PetSet cockroachdb waiting on unhealthy pet cockroachdb-0 I0830 12:32:52. 9 ] PetSet cockroachdb waiting on unhealthy pet cockroachdb-1 I0830 12:32:52. 9 ] PetSet cockroachdb waiting on unhealthy pet cockroachdb-2\n-- do you have the petset YAML that we can evaluate? What is the quality of service tier for the pods produced by the petset? Can you include the of an evicted petset pod so we can see the final resource requirements spec?\nI am able to reproduce it with nginx pet set in e2e test. Here is yaml output of it Also it seems that i have working fix and almost finished debugging e2e test (patchset is linked above). Will be grateful for review.\nAh, this helps clear things up... the linked pr shows the problem in petset controller and not kubelet...\nDoes it make sense to set milestone 1.4 for this issue? 
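The controller change in this record deletes a pet pod that has reached the Failed phase (for example, evicted under node memory pressure) so that a replacement with the same identity can be created. Below is a rough, self-contained Go sketch of that decision only; the Pod type, Client interface, and fakeClient are hypothetical stand-ins rather than the real pkg/controller/petset code.

package main

import "fmt"

// Pod is a hypothetical stand-in for the API pod object.
type Pod struct {
	Name  string
	Phase string // "Pending", "Running", "Failed", ...
}

// Client is a hypothetical stand-in for the pet client used by the controller.
type Client interface {
	Delete(name string) error
	Create(name string) error
}

// syncPet deletes an existing pet pod that has failed so a fresh pod with the
// same ordinal name can be created; a live, non-failed pod is left alone.
func syncPet(c Client, existing *Pod, name string) error {
	if existing != nil && existing.Phase == "Failed" {
		if err := c.Delete(existing.Name); err != nil {
			return err
		}
		existing = nil // treat the pet as missing from here on
	}
	if existing == nil {
		return c.Create(name)
	}
	return nil // pod exists and is not failed; nothing to do
}

type fakeClient struct{}

func (fakeClient) Delete(name string) error { fmt.Println("delete", name); return nil }
func (fakeClient) Create(name string) error { fmt.Println("create", name); return nil }

func main() {
	evicted := &Pod{Name: "web-0", Phase: "Failed"}
	_ = syncPet(fakeClient{}, evicted, "web-0")
}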
I think the fix is ready for review", "positive_passages": [{"docid": "doc-en-kubernetes-41c7831f97f42c1048ac0edcaac53f5d4ca7dec7dba6b8fc7630067fc506bd02", "text": "\"k8s.io/kubernetes/pkg/controller/petset\" \"k8s.io/kubernetes/pkg/labels\" \"k8s.io/kubernetes/pkg/runtime\" \"k8s.io/kubernetes/pkg/types\" \"k8s.io/kubernetes/pkg/util/sets\" \"k8s.io/kubernetes/pkg/util/wait\" utilyaml \"k8s.io/kubernetes/pkg/util/yaml\" \"k8s.io/kubernetes/pkg/watch\" \"k8s.io/kubernetes/test/e2e/framework\" ) const ( petsetPoll = 10 * time.Second // Some pets install base packages via wget petsetTimeout = 10 * time.Minute petsetTimeout = 10 * time.Minute // Timeout for pet pods to change state petPodTimeout = 5 * time.Minute zookeeperManifestPath = \"test/e2e/testing-manifests/petset/zookeeper\" mysqlGaleraManifestPath = \"test/e2e/testing-manifests/petset/mysql-galera\" redisManifestPath = \"test/e2e/testing-manifests/petset/redis\"", "commid": "kubernetes_pr_31777"}], "negative_passages": []} {"query_id": "q-en-kubernetes-a540a96ec019e4374a1aed3eeb2f1a3870891c79dbab54fc2c3626988ef08e83", "query": "Description of problem Create a petset, when pod is evicted, the petset can't create a new pod on other node and make sure pod is running. Kubernetes version (use ): v1.4.0-alpha.2.1427+ How to reproduce it (as minimally and precisely as possible): kubele eviction-hard=\"memory.available<${value}\" a peset and wait all pod is running node MemoryPressure=true all pod status What happened: is evicted and petset can't create new pod and make sure pod is running. What you expected to happen: create new pod on other node and make sure pod is running.\nAlso when node come back with sufficient memory, petset can't make pod running again.\nFYI\ncc\nWas able to reproduce it. Pet set controller just loops with the following logs I0830 12:32:52. 9 ] Syncing PetSet default/cockroachdb with 3 pets I0830 12:32:52. 9 ] PetSet cockroachdb blocked from scaling on pet cockroachdb-0 I0830 12:32:52. 9 ] PetSet cockroachdb waiting on unhealthy pet cockroachdb-0 I0830 12:32:52. 9 ] PetSet cockroachdb waiting on unhealthy pet cockroachdb-1 I0830 12:32:52. 9 ] PetSet cockroachdb waiting on unhealthy pet cockroachdb-2\n-- do you have the petset YAML that we can evaluate? What is the quality of service tier for the pods produced by the petset? Can you include the of an evicted petset pod so we can see the final resource requirements spec?\nI am able to reproduce it with nginx pet set in e2e test. Here is yaml output of it Also it seems that i have working fix and almost finished debugging e2e test (patchset is linked above). Will be grateful for review.\nAh, this helps clear things up... the linked pr shows the problem in petset controller and not kubelet...\nDoes it make sense to set milestone 1.4 for this issue? 
I think the fix is ready for review", "positive_passages": [{"docid": "doc-en-kubernetes-7e03950edfab3b0154b44aac0b275c03d68ad4cb0bab416f009856c0fb20bac6", "text": "}) }) var _ = framework.KubeDescribe(\"Pet set recreate [Slow] [Feature:PetSet]\", func() { f := framework.NewDefaultFramework(\"pet-set-recreate\") var c *client.Client var ns string labels := map[string]string{ \"foo\": \"bar\", \"baz\": \"blah\", } headlessSvcName := \"test\" podName := \"test-pod\" petSetName := \"web\" petPodName := \"web-0\" BeforeEach(func() { framework.SkipUnlessProviderIs(\"gce\", \"vagrant\") By(\"creating service \" + headlessSvcName + \" in namespace \" + f.Namespace.Name) headlessService := createServiceSpec(headlessSvcName, \"\", true, labels) _, err := f.Client.Services(f.Namespace.Name).Create(headlessService) framework.ExpectNoError(err) c = f.Client ns = f.Namespace.Name }) AfterEach(func() { if CurrentGinkgoTestDescription().Failed { dumpDebugInfo(c, ns) } By(\"Deleting all petset in ns \" + ns) deleteAllPetSets(c, ns) }) It(\"should recreate evicted petset\", func() { By(\"looking for a node to schedule pet set and pod\") nodes := framework.GetReadySchedulableNodesOrDie(f.Client) node := nodes.Items[0] By(\"creating pod with conflicting port in namespace \" + f.Namespace.Name) conflictingPort := api.ContainerPort{HostPort: 21017, ContainerPort: 21017, Name: \"conflict\"} pod := &api.Pod{ ObjectMeta: api.ObjectMeta{ Name: podName, }, Spec: api.PodSpec{ Containers: []api.Container{ { Name: \"nginx\", Image: \"gcr.io/google_containers/nginx-slim:0.7\", Ports: []api.ContainerPort{conflictingPort}, }, }, NodeName: node.Name, }, } pod, err := f.Client.Pods(f.Namespace.Name).Create(pod) framework.ExpectNoError(err) By(\"creating petset with conflicting port in namespace \" + f.Namespace.Name) ps := newPetSet(petSetName, f.Namespace.Name, headlessSvcName, 1, nil, nil, labels) petContainer := &ps.Spec.Template.Spec.Containers[0] petContainer.Ports = append(petContainer.Ports, conflictingPort) ps.Spec.Template.Spec.NodeName = node.Name _, err = f.Client.Apps().PetSets(f.Namespace.Name).Create(ps) framework.ExpectNoError(err) By(\"waiting until pod \" + podName + \" will start running in namespace \" + f.Namespace.Name) if err := f.WaitForPodRunning(podName); err != nil { framework.Failf(\"Pod %v did not start running: %v\", podName, err) } var initialPetPodUID types.UID By(\"waiting until pet pod \" + petPodName + \" will be recreated and deleted at least once in namespace \" + f.Namespace.Name) w, err := f.Client.Pods(f.Namespace.Name).Watch(api.SingleObject(api.ObjectMeta{Name: petPodName})) framework.ExpectNoError(err) // we need to get UID from pod in any state and wait until pet set controller will remove pod atleast once _, err = watch.Until(petPodTimeout, w, func(event watch.Event) (bool, error) { pod := event.Object.(*api.Pod) switch event.Type { case watch.Deleted: framework.Logf(\"Observed delete event for pet pod %v in namespace %v\", pod.Name, pod.Namespace) if initialPetPodUID == \"\" { return false, nil } return true, nil } framework.Logf(\"Observed pet pod in namespace: %v, name: %v, uid: %v, status phase: %v. 
Waiting for petset controller to delete.\", pod.Namespace, pod.Name, pod.UID, pod.Status.Phase) initialPetPodUID = pod.UID return false, nil }) if err != nil { framework.Failf(\"Pod %v expected to be re-created atleast once\", petPodName) } By(\"removing pod with conflicting port in namespace \" + f.Namespace.Name) err = f.Client.Pods(f.Namespace.Name).Delete(pod.Name, api.NewDeleteOptions(0)) framework.ExpectNoError(err) By(\"waiting when pet pod \" + petPodName + \" will be recreated in namespace \" + f.Namespace.Name + \" and will be in running state\") // we may catch delete event, thats why we are waiting for running phase like this, and not with watch.Until Eventually(func() error { petPod, err := f.Client.Pods(f.Namespace.Name).Get(petPodName) if err != nil { return err } if petPod.Status.Phase != api.PodRunning { return fmt.Errorf(\"Pod %v is not in running phase: %v\", petPod.Name, petPod.Status.Phase) } else if petPod.UID == initialPetPodUID { return fmt.Errorf(\"Pod %v wasn't recreated: %v == %v\", petPod.Name, petPod.UID, initialPetPodUID) } return nil }, petPodTimeout, 2*time.Second).Should(BeNil()) }) }) func dumpDebugInfo(c *client.Client, ns string) { pl, _ := c.Pods(ns).List(api.ListOptions{LabelSelector: labels.Everything()}) for _, p := range pl.Items {", "commid": "kubernetes_pr_31777"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5ed5c1e20e14fe5a611a17f918cbdb7750120c4d2b83d5cbf5c92766c9f4890f", "query": "https://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite} Previous issues for this test:\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity 
[Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nI'm pretty sure, those amount of failure are related to: (I don't know why yet).\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}", "positive_passages": [{"docid": "doc-en-kubernetes-372696e4ae48d7218ba6a463315c3689a491efa5d974d204606a29e2317863e9", "text": "return err } if waitForReplicas != nil { watchOptions := api.ListOptions{FieldSelector: fields.OneTermEqualSelector(\"metadata.name\", name), ResourceVersion: updatedResourceVersion} watcher, err := scaler.c.ReplicationControllers(namespace).Watch(watchOptions) checkRC := func(rc *api.ReplicationController) bool { if uint(rc.Spec.Replicas) != newSize { // the size is changed by other party. Don't need to wait for the new change to complete. return true } return rc.Status.ObservedGeneration >= rc.Generation && rc.Status.Replicas == rc.Spec.Replicas } // If number of replicas doesn't change, then the update may not event // be sent to underlying databse (we don't send no-op changes). 
// In such case, will have value of the most // recent update (which may be far in the past) so we may get \"too old // RV\" error from watch or potentially no ReplicationController events // will be deliver, since it may already be in the expected state. // To protect from these two, we first issue Get() to ensure that we // are not already in the expected state. currentRC, err := scaler.c.ReplicationControllers(namespace).Get(name) if err != nil { return err } _, err = watch.Until(waitForReplicas.Timeout, watcher, func(event watch.Event) (bool, error) { if event.Type != watch.Added && event.Type != watch.Modified { return false, nil if !checkRC(currentRC) { watchOptions := api.ListOptions{ FieldSelector: fields.OneTermEqualSelector(\"metadata.name\", name), ResourceVersion: updatedResourceVersion, } rc := event.Object.(*api.ReplicationController) if uint(rc.Spec.Replicas) != newSize { // the size is changed by other party. Don't need to wait for the new change to complete. return true, nil watcher, err := scaler.c.ReplicationControllers(namespace).Watch(watchOptions) if err != nil { return err } return rc.Status.ObservedGeneration >= rc.Generation && rc.Status.Replicas == rc.Spec.Replicas, nil }) if err == wait.ErrWaitTimeout { return fmt.Errorf(\"timed out waiting for %q to be synced\", name) _, err = watch.Until(waitForReplicas.Timeout, watcher, func(event watch.Event) (bool, error) { if event.Type != watch.Added && event.Type != watch.Modified { return false, nil } return checkRC(event.Object.(*api.ReplicationController)), nil }) if err == wait.ErrWaitTimeout { return fmt.Errorf(\"timed out waiting for %q to be synced\", name) } return err } return err } return nil }", "commid": "kubernetes_pr_31421"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5ed5c1e20e14fe5a611a17f918cbdb7750120c4d2b83d5cbf5c92766c9f4890f", "query": "https://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite} Previous issues for this test:\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- 
Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nI'm pretty sure, those amount of failure are related to: (I don't know why yet).\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}", "positive_passages": [{"docid": "doc-en-kubernetes-057311196765cb0e7f92e30190d09f1e0654a8d0171a68f24d80a16bb92963fb", "text": "}, }, StopError: nil, ExpectedActions: []string{\"get\", \"list\", \"get\", \"update\", \"watch\", \"delete\"}, ExpectedActions: []string{\"get\", \"list\", \"get\", \"update\", \"get\", \"delete\"}, }, { Name: \"NoOverlapping\",", "commid": "kubernetes_pr_31421"}], "negative_passages": []} {"query_id": "q-en-kubernetes-5ed5c1e20e14fe5a611a17f918cbdb7750120c4d2b83d5cbf5c92766c9f4890f", "query": "https://k8s- Failed: [] Load capacity [Feature:Performance] 
should be able to handle 30 pods per node {Kubernetes e2e suite} Previous issues for this test:\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nI'm pretty sure, those amount of failure are related to: (I don't know why yet).\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per 
node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}\nhttps://k8s- Failed: [] Load capacity [Feature:Performance] should be able to handle 30 pods per node {Kubernetes e2e suite}", "positive_passages": [{"docid": "doc-en-kubernetes-a6858dc85a4620c11bc5ae5f281d6f3be7f0f950858388f6b0ac32cd30ad20a0", "text": "}, }, StopError: nil, ExpectedActions: []string{\"get\", \"list\", \"get\", \"update\", \"watch\", \"delete\"}, ExpectedActions: []string{\"get\", \"list\", \"get\", \"update\", \"get\", \"delete\"}, }, { Name: \"OverlappingError\",", "commid": "kubernetes_pr_31421"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ccf323f57343c53a81e615f5f91e91830995a9424300c72d74f0847dc125d753", "query": " y/a and y/b // Egress allowed to y/a only. Egress to y/b should be blocked // Ingress on y/a and y/b allow traffic from x/a // Expectation: traffic from x/a to y/a allowed only, traffic from x/a to y/b denied by egress policy nsX, nsY, _, model, k8s := getK8SModel(f) // Building egress policy for x/a to y/a only allowedEgressNamespaces := &metav1.LabelSelector{ MatchLabels: map[string]string{ \"ns\": nsY, }, } allowedEgressPods := &metav1.LabelSelector{ MatchLabels: map[string]string{ \"pod\": \"a\", }, } egressPolicy := GetAllowEgressByNamespaceAndPod(\"allow-to-ns-y-pod-a\", map[string]string{\"pod\": \"a\"}, allowedEgressNamespaces, allowedEgressPods) CreatePolicy(k8s, egressPolicy, nsX) // Creating ingress policy to allow from x/a to y/a and y/b allowedIngressNamespaces := &metav1.LabelSelector{ MatchLabels: map[string]string{ \"ns\": nsX, }, } allowedIngressPods := &metav1.LabelSelector{ MatchLabels: map[string]string{ \"pod\": \"a\", }, } allowIngressPolicyPodA := GetAllowIngressByNamespaceAndPod(\"allow-from-xa-on-ya-match-selector\", map[string]string{\"pod\": \"a\"}, allowedIngressNamespaces, allowedIngressPods) allowIngressPolicyPodB := GetAllowIngressByNamespaceAndPod(\"allow-from-xa-on-yb-match-selector\", map[string]string{\"pod\": \"b\"}, allowedIngressNamespaces, allowedIngressPods) CreatePolicy(k8s, allowIngressPolicyPodA, nsY) CreatePolicy(k8s, allowIngressPolicyPodB, nsY) // While applying the policies, traffic needs to be allowed by both egress and ingress rules. // Egress rules only // \txa\txb\txc\tya\tyb\tyc\tza\tzb\tzc // xa\tX\tX\tX\t.\t*X*\tX\tX\tX\tX // xb\t.\t.\t.\t.\t.\t.\t.\t.\t. // xc\t.\t.\t.\t.\t.\t.\t.\t.\t. // ya\t.\t.\t.\t.\t.\t.\t.\t.\t. // yb\t.\t.\t.\t.\t.\t.\t.\t.\t. // yc\t.\t.\t.\t.\t.\t.\t.\t.\t. // za\t.\t.\t.\t.\t.\t.\t.\t.\t. // zb\t.\t.\t.\t.\t.\t.\t.\t.\t. // zc\t.\t.\t.\t.\t.\t.\t.\t.\t. // Ingress rules only // \txa\txb\txc\tya\tyb\tyc\tza\tzb\tzc // xa\t.\t.\t.\t*.*\t.\t.\t.\t.\t. 
// xb\t.\t.\tX\tX\t.\t.\t.\t.\t. // xc\t.\t.\tX\tX\t.\t.\t.\t.\t. // ya\t.\t.\tX\tX\t.\t.\t.\t.\t. // yb\t.\t.\tX\tX\t.\t.\t.\t.\t. // yc\t.\t.\tX\tX\t.\t.\t.\t.\t. // za\t.\t.\tX\tX\t.\t.\t.\t.\t. // zb\t.\t.\tX\tX\t.\t.\t.\t.\t. // zc\t.\t.\tX\tX\t.\t.\t.\t.\t. // In the resulting truth table, connections from x/a should only be allowed to y/a. x/a to y/b should be blocked by the egress on x/a. // Expected results // \txa\txb\txc\tya\tyb\tyc\tza\tzb\tzc // xa\tX\tX\tX\t.\t*X*\tX\tX\tX\tX // xb\t.\t.\t.\tX\tX\t.\t.\t.\t. // xc\t.\t.\t.\tX\tX\t.\t.\t.\t. // ya\t.\t.\t.\tX\tX\t.\t.\t.\t. // yb\t.\t.\t.\tX\tX\t.\t.\t.\t. // yc\t.\t.\t.\tX\tX\t.\t.\t.\t. // za\t.\t.\t.\tX\tX\t.\t.\t.\t. // zb\t.\t.\t.\tX\tX\t.\t.\t.\t. // zc\t.\t.\t.\tX\tX\t.\t.\t.\t. reachability := NewReachability(model.AllPods(), true) // Default all traffic flows. // Exception: x/a can only egress to y/a, others are false // Exception: y/a can only allow ingress from x/a, others are false // Exception: y/b has no allowed traffic (due to limit on x/a egress) reachability.ExpectPeer(&Peer{Namespace: nsX, Pod: \"a\"}, &Peer{}, false) reachability.ExpectPeer(&Peer{}, &Peer{Namespace: nsY, Pod: \"a\"}, false) reachability.ExpectPeer(&Peer{Namespace: nsX, Pod: \"a\"}, &Peer{Namespace: nsY, Pod: \"a\"}, true) reachability.ExpectPeer(&Peer{}, &Peer{Namespace: nsY, Pod: \"b\"}, false) ValidateOrFail(k8s, model, &TestCase{FromPort: 81, ToPort: 80, Protocol: v1.ProtocolTCP, Reachability: reachability}) }) ginkgo.It(\"should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]\", func() { nsX, nsY, _, model, k8s := getK8SModel(f) allowedNamespaces := &metav1.LabelSelector{", "commid": "kubernetes_pr_97524"}], "negative_passages": []} {"query_id": "q-en-kubernetes-b7e5bceb9e558da9f3a941788a29989a1d233b0b46f0b77ff1f2e63e703dc964", "query": "We could either: where appropriate - least amount of code change but cumbersome OS version in the tag instead of the repository /cc\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. IMAGE_WITH_OS_ARCH = $(IMAGE)-$(OS)-$(ARCH) TAG = 3.4 TAG = 3.4.1 REV = $(shell git describe --contains --always --match='v*') # Architectures supported: amd64, arm, arm64, ppc64le and s390x", "commid": "kubernetes_pr_97782"}], "negative_passages": []} {"query_id": "q-en-kubernetes-b7e5bceb9e558da9f3a941788a29989a1d233b0b46f0b77ff1f2e63e703dc964", "query": "We could either: where appropriate - least amount of code change but cumbersome OS version in the tag instead of the repository /cc\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. 
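As a side note on the os.version discussion above: the sketch below is illustrative only and is not part of the project's build tooling. It shows how a consumer could pick the right entry out of a multi-arch manifest list once the Windows entries carry os.version, which is what lets a Windows node pull the image built for its host build. The OCI image-spec Go types are real, but the package name, function names, and the major.minor.build comparison rule are assumptions made for the example.

```go
// Package platformpick is a hypothetical helper used only to illustrate
// matching a platform against a multi-arch manifest list (OCI image index).
package platformpick

import (
	"fmt"
	"strings"

	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// pickManifest returns the first manifest whose platform matches the requested
// OS/architecture and, for Windows, whose os.version shares the same
// major.minor.build prefix as the host version.
func pickManifest(index ocispec.Index, targetOS, targetArch, hostOSVersion string) (*ocispec.Descriptor, error) {
	for i := range index.Manifests {
		p := index.Manifests[i].Platform
		if p == nil || p.OS != targetOS || p.Architecture != targetArch {
			continue
		}
		if targetOS == "windows" && !sameBuild(p.OSVersion, hostOSVersion) {
			continue
		}
		return &index.Manifests[i], nil
	}
	return nil, fmt.Errorf("no manifest for %s/%s (os.version %q)", targetOS, targetArch, hostOSVersion)
}

// sameBuild compares the first three dotted fields of a Windows version
// string, so "10.0.17763.1577" and "10.0.17763.2061" count as compatible.
func sameBuild(entryVersion, hostVersion string) bool {
	prefix := func(s string) string {
		parts := strings.SplitN(s, ".", 4)
		if len(parts) < 3 {
			return s
		}
		return strings.Join(parts[:3], ".")
	}
	return entryVersion != "" && prefix(entryVersion) == prefix(hostVersion)
}
```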
docker manifest create --amend $(IMAGE):$(TAG) $(shell echo $(ALL_OS_ARCH) | sed -e \"s~[^ ]*~$(IMAGE)-&:$(TAG)~g\") set -x; for arch in $(ALL_ARCH.linux); do docker manifest annotate --os linux --arch $${arch} ${IMAGE}:${TAG} ${IMAGE}-linux-$${arch}:${TAG}; done docker manifest create --amend $(IMAGE):$(TAG) $(shell echo $(ALL_OS_ARCH) | sed -e \"s~[^ ]*~$(IMAGE):$(TAG)-&~g\") set -x; for arch in $(ALL_ARCH.linux); do docker manifest annotate --os linux --arch $${arch} ${IMAGE}:${TAG} ${IMAGE}:${TAG}-linux-$${arch}; done # For Windows images, we also need to include the \"os.version\" in the manifest list, so the Windows node can pull the proper image it needs. # At the moment, docker manifest annotate doesn't allow us to set the os.version, so we'll have to it ourselves. The manifest list can be found locally as JSONs. # See: https://github.com/moby/moby/issues/41417", "commid": "kubernetes_pr_97782"}], "negative_passages": []} {"query_id": "q-en-kubernetes-b7e5bceb9e558da9f3a941788a29989a1d233b0b46f0b77ff1f2e63e703dc964", "query": "We could either: where appropriate - least amount of code change but cumbersome OS version in the tag instead of the repository /cc\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. docker manifest annotate --os windows --arch $${arch} ${IMAGE}:${TAG} ${IMAGE}-windows-$${arch}-$${osversion}:${TAG}; docker manifest annotate --os windows --arch $${arch} ${IMAGE}:${TAG} ${IMAGE}:${TAG}-windows-$${arch}-$${osversion}; BASEIMAGE=${BASE.windows}:$${osversion}; full_version=`docker manifest inspect ${BASE.windows}:$${osversion} | grep \"os.version\" | head -n 1 | awk '{print $$2}'` || true; sed -i -r \"s/(\"os\":\"windows\")/0,\"os.version\":$${full_version}/\" \"${HOME}/.docker/manifests/$${manifest_image_folder}-${TAG}/$${manifest_image_folder}-windows-$${arch}-$${osversion}-${TAG}\"; sed -i -r \"s/(\"os\":\"windows\")/0,\"os.version\":$${full_version}/\" \"${HOME}/.docker/manifests/$${manifest_image_folder}-${TAG}/$${manifest_image_folder}-${TAG}-windows-$${arch}-$${osversion}\"; done; done docker manifest push --purge ${IMAGE}:${TAG}", "commid": "kubernetes_pr_97782"}], "negative_passages": []} {"query_id": "q-en-kubernetes-b7e5bceb9e558da9f3a941788a29989a1d233b0b46f0b77ff1f2e63e703dc964", "query": "We could either: where appropriate - least amount of code change but cumbersome OS version in the tag instead of the repository /cc\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. -t $(IMAGE_WITH_OS_ARCH):$(TAG) --build-arg BASE=${BASE} --build-arg ARCH=$(ARCH) . -t $(IMAGE):$(TAG)-${OS}-$(ARCH) --build-arg BASE=${BASE} --build-arg ARCH=$(ARCH) . touch $@ .container-windows-$(ARCH): $(foreach binary, ${BIN}, bin/${binary}-${OS}-${ARCH}) docker buildx build --pull --output=type=${OUTPUT_TYPE} --platform ${OS}/$(ARCH) -t $(IMAGE_WITH_OS_ARCH)-${OSVERSION}:$(TAG) --build-arg BASE=${BASE}:${OSVERSION} --build-arg ARCH=$(ARCH) -f Dockerfile_windows . -t $(IMAGE):$(TAG)-${OS}-$(ARCH)-${OSVERSION} --build-arg BASE=${BASE}:${OSVERSION} --build-arg ARCH=$(ARCH) -f Dockerfile_windows . 
touch $@ # Useful for testing, not automatically included in container image", "commid": "kubernetes_pr_97782"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e13a3765f3b220183700c97220ce5674ec114ad3ff68bc827eafde8e290c4a4e", "query": "[ ] two weeks soak start date : testgrid-link [ ] two weeks soak end date : [ ] test promotion pr : kubernetes/kubernetes#? According to this APIsnoop query, there are still some remaining StatefulsetScale endpoints that are untested. with this query, you can filter untested endpoints by their category and eligibility for conformance. e.g below shows a query to find all conformance eligible untested, stable, apps endpoints sql-mode select distinct endpoint, right(useragent,73) AS useragent from testing.auditevent -- where useragent ilike '%subresource%' where endpoint ilike '%AppsV1NamespacedStatefulSetScale%' and releasedate::BIGINT round(((EXTRACT(EPOCH FROM NOW()))::numeric)1000,0) - and useragent like 'e2e%' order by endpoint limit 30; example endpoint | useragent -----------------------------------------|--------------------------------------------------------------------------- patchAppsV1NamespacedStatefulSetScale | [StatefulSetBasic] should have a working scale subresource [Conformance] readAppsV1NamespacedStatefulSetScale | [StatefulSetBasic] should have a working scale subresource [Conformance] replaceAppsV1NamespacedStatefulSetScale | [StatefulSetBasic] should have a working scale subresource [Conformance] (3 rows) If a test with these calls gets merged, test coverage will go up by 1 point* This test is also created with the goal of conformance promotion. /sig testing /sig architecture /area conformance\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. release: v1.16 release: v1.16, v1.21 file: test/e2e/apps/statefulset.go - testname: StatefulSet, Rolling Update with Partition codename: '[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]", "commid": "kubernetes_pr_98126"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e13a3765f3b220183700c97220ce5674ec114ad3ff68bc827eafde8e290c4a4e", "query": "[ ] two weeks soak start date : testgrid-link [ ] two weeks soak end date : [ ] test promotion pr : kubernetes/kubernetes#? According to this APIsnoop query, there are still some remaining StatefulsetScale endpoints that are untested. with this query, you can filter untested endpoints by their category and eligibility for conformance. 
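For context on the StatefulSetScale endpoints mentioned above, here is a minimal client-go sketch, separate from the e2e framework code quoted in these records, that exercises the read and replace calls of the scale subresource. The clientset, namespace, and StatefulSet name are assumed to be supplied by the caller; the package and function names are hypothetical.

```go
// Package scaledemo is a hypothetical illustration of driving the
// StatefulSet /scale subresource with a plain client-go clientset.
package scaledemo

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleStatefulSet reads the scale subresource and replaces it with the
// desired replica count, which maps onto readAppsV1NamespacedStatefulSetScale
// and replaceAppsV1NamespacedStatefulSetScale.
func scaleStatefulSet(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	scale, err := cs.AppsV1().StatefulSets(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return fmt.Errorf("reading scale subresource: %w", err)
	}
	scale.Spec.Replicas = replicas
	if _, err := cs.AppsV1().StatefulSets(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{}); err != nil {
		return fmt.Errorf("replacing scale subresource: %w", err)
	}
	return nil
}
```

The patch endpoint can be exercised in the same spirit by sending a strategic-merge patch to the "scale" subresource, as the quoted e2e code in the records below does.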
e.g below shows a query to find all conformance eligible untested, stable, apps endpoints sql-mode select distinct endpoint, right(useragent,73) AS useragent from testing.auditevent -- where useragent ilike '%subresource%' where endpoint ilike '%AppsV1NamespacedStatefulSetScale%' and releasedate::BIGINT round(((EXTRACT(EPOCH FROM NOW()))::numeric)1000,0) - and useragent like 'e2e%' order by endpoint limit 30; example endpoint | useragent -----------------------------------------|--------------------------------------------------------------------------- patchAppsV1NamespacedStatefulSetScale | [StatefulSetBasic] should have a working scale subresource [Conformance] readAppsV1NamespacedStatefulSetScale | [StatefulSetBasic] should have a working scale subresource [Conformance] replaceAppsV1NamespacedStatefulSetScale | [StatefulSetBasic] should have a working scale subresource [Conformance] (3 rows) If a test with these calls gets merged, test coverage will go up by 1 point* This test is also created with the goal of conformance promotion. /sig testing /sig architecture /area conformance\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. \"encoding/json\" \"fmt\" \"strings\" \"sync\"", "commid": "kubernetes_pr_98126"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e13a3765f3b220183700c97220ce5674ec114ad3ff68bc827eafde8e290c4a4e", "query": "[ ] two weeks soak start date : testgrid-link [ ] two weeks soak end date : [ ] test promotion pr : kubernetes/kubernetes#? According to this APIsnoop query, there are still some remaining StatefulsetScale endpoints that are untested. with this query, you can filter untested endpoints by their category and eligibility for conformance. e.g below shows a query to find all conformance eligible untested, stable, apps endpoints sql-mode select distinct endpoint, right(useragent,73) AS useragent from testing.auditevent -- where useragent ilike '%subresource%' where endpoint ilike '%AppsV1NamespacedStatefulSetScale%' and releasedate::BIGINT round(((EXTRACT(EPOCH FROM NOW()))::numeric)1000,0) - and useragent like 'e2e%' order by endpoint limit 30; example endpoint | useragent -----------------------------------------|--------------------------------------------------------------------------- patchAppsV1NamespacedStatefulSetScale | [StatefulSetBasic] should have a working scale subresource [Conformance] readAppsV1NamespacedStatefulSetScale | [StatefulSetBasic] should have a working scale subresource [Conformance] replaceAppsV1NamespacedStatefulSetScale | [StatefulSetBasic] should have a working scale subresource [Conformance] (3 rows) If a test with these calls gets merged, test coverage will go up by 1 point* This test is also created with the goal of conformance promotion. /sig testing /sig architecture /area conformance\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. 
autoscalingv1 \"k8s.io/api/autoscaling/v1\" v1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/fields\"", "commid": "kubernetes_pr_98126"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e13a3765f3b220183700c97220ce5674ec114ad3ff68bc827eafde8e290c4a4e", "query": "[ ] two weeks soak start date : testgrid-link [ ] two weeks soak end date : [ ] test promotion pr : kubernetes/kubernetes#? According to this APIsnoop query, there are still some remaining StatefulsetScale endpoints that are untested. with this query, you can filter untested endpoints by their category and eligibility for conformance. e.g below shows a query to find all conformance eligible untested, stable, apps endpoints sql-mode select distinct endpoint, right(useragent,73) AS useragent from testing.auditevent -- where useragent ilike '%subresource%' where endpoint ilike '%AppsV1NamespacedStatefulSetScale%' and releasedate::BIGINT round(((EXTRACT(EPOCH FROM NOW()))::numeric)1000,0) - and useragent like 'e2e%' order by endpoint limit 30; example endpoint | useragent -----------------------------------------|--------------------------------------------------------------------------- patchAppsV1NamespacedStatefulSetScale | [StatefulSetBasic] should have a working scale subresource [Conformance] readAppsV1NamespacedStatefulSetScale | [StatefulSetBasic] should have a working scale subresource [Conformance] replaceAppsV1NamespacedStatefulSetScale | [StatefulSetBasic] should have a working scale subresource [Conformance] (3 rows) If a test with these calls gets merged, test coverage will go up by 1 point* This test is also created with the goal of conformance promotion. /sig testing /sig architecture /area conformance\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. Release: v1.16 Release: v1.16, v1.21 Testname: StatefulSet resource Replica scaling Description: Create a StatefulSet resource. Newly created StatefulSet resource MUST have a scale of one.", "commid": "kubernetes_pr_98126"}], "negative_passages": []} {"query_id": "q-en-kubernetes-e13a3765f3b220183700c97220ce5674ec114ad3ff68bc827eafde8e290c4a4e", "query": "[ ] two weeks soak start date : testgrid-link [ ] two weeks soak end date : [ ] test promotion pr : kubernetes/kubernetes#? According to this APIsnoop query, there are still some remaining StatefulsetScale endpoints that are untested. with this query, you can filter untested endpoints by their category and eligibility for conformance. 
e.g below shows a query to find all conformance eligible untested, stable, apps endpoints sql-mode select distinct endpoint, right(useragent,73) AS useragent from testing.auditevent -- where useragent ilike '%subresource%' where endpoint ilike '%AppsV1NamespacedStatefulSetScale%' and releasedate::BIGINT round(((EXTRACT(EPOCH FROM NOW()))::numeric)1000,0) - and useragent like 'e2e%' order by endpoint limit 30; example endpoint | useragent -----------------------------------------|--------------------------------------------------------------------------- patchAppsV1NamespacedStatefulSetScale | [StatefulSetBasic] should have a working scale subresource [Conformance] readAppsV1NamespacedStatefulSetScale | [StatefulSetBasic] should have a working scale subresource [Conformance] replaceAppsV1NamespacedStatefulSetScale | [StatefulSetBasic] should have a working scale subresource [Conformance] (3 rows) If a test with these calls gets merged, test coverage will go up by 1 point* This test is also created with the goal of conformance promotion. /sig testing /sig architecture /area conformance\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. ginkgo.By(\"Patch a scale subresource\") scale.ResourceVersion = \"\" // indicate the scale update should be unconditional scale.Spec.Replicas = 4 // should be 2 after \"UpdateScale\" operation, now Patch to 4 ssScalePatchPayload, err := json.Marshal(autoscalingv1.Scale{ Spec: autoscalingv1.ScaleSpec{ Replicas: scale.Spec.Replicas, }, }) framework.ExpectNoError(err, \"Could not Marshal JSON for patch payload\") _, err = c.AppsV1().StatefulSets(ns).Patch(context.TODO(), ssName, types.StrategicMergePatchType, []byte(ssScalePatchPayload), metav1.PatchOptions{}, \"scale\") framework.ExpectNoError(err, \"Failed to patch stateful set: %v\", err) ginkgo.By(\"verifying the statefulset Spec.Replicas was modified\") ss, err = c.AppsV1().StatefulSets(ns).Get(context.TODO(), ssName, metav1.GetOptions{}) framework.ExpectNoError(err, \"Failed to get statefulset resource: %v\", err) framework.ExpectEqual(*(ss.Spec.Replicas), int32(4), \"statefulset should have 4 replicas\") }) })", "commid": "kubernetes_pr_98126"}], "negative_passages": []} {"query_id": "q-en-kubernetes-444fa5b2edd377b73e5bc84b27cebb7f3765c55163f585dc8d441d2a1c256789", "query": "What happened: with --cpu-manager-policy=static kubelet will panic when pause container lose during kubelet restarting What you expected to happen: kubelet can start normally How to reproduce it (as minimally and precisely as possible): 1,config kubelet with --cpu-manager-policy=static 2, Run a large number of pods on the test node 3,stop kubelet use command: 4,kill and remove all the pause container like this: ! 5,start kubelet use command: and then kubelet maybe panic, log is : Anything else we need to know?: When we actually encountered this problem, only one pause container was lost. In order to reproduce this bug, I deleted all pause containers. Environment: Kubernetes version (use ): 1.20.1 Cloud provider or hardware configuration: x8664 OS (e.g: ):18.04.5 LTS (Bionic Beaver) Kernel (e.g. ): Linux ecs-linux-0008-43ee 4.15.0-123-generic -Ubuntu SMP Wed Oct 21 09:40:11 UTC 2020 x8664 x8664 x8664 GNU/Linux Install tools: Network plugin and version (if this is a network-related bug): Others:\n/sig node /assign\nIssues go stale after 90d of inactivity. 
Mark the issue as fresh with . Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with . Send feedback to sig-contributor-experience at . /lifecycle stale\nStale issues rot after 30d of inactivity. Mark the issue as fresh with . Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with . Send feedback to sig-contributor-experience at . /lifecycle rotten\nRotten issues close after 30d of inactivity. Reopen the issue with . Mark the issue as fresh with . Send feedback to sig-contributor-experience at . /close\nClosing this issue. containerMap, err := buildContainerMapFromRuntime(runtimeService) if err != nil { return fmt.Errorf(\"failed to build map of initial containers from runtime: %v\", err) } err = cm.cpuManager.Start(cpumanager.ActivePodsFunc(activePods), sourcesReady, podStatusProvider, runtimeService, containerMap) containerMap := buildContainerMapFromRuntime(runtimeService) err := cm.cpuManager.Start(cpumanager.ActivePodsFunc(activePods), sourcesReady, podStatusProvider, runtimeService, containerMap) if err != nil { return fmt.Errorf(\"start cpu manager error: %v\", err) }", "commid": "kubernetes_pr_108325"}], "negative_passages": []} {"query_id": "q-en-kubernetes-444fa5b2edd377b73e5bc84b27cebb7f3765c55163f585dc8d441d2a1c256789", "query": "What happened: with --cpu-manager-policy=static kubelet will panic when pause container lose during kubelet restarting What you expected to happen: kubelet can start normally How to reproduce it (as minimally and precisely as possible): 1,config kubelet with --cpu-manager-policy=static 2, Run a large number of pods on the test node 3,stop kubelet use command: 4,kill and remove all the pause container like this: ! 5,start kubelet use command: and then kubelet maybe panic, log is : Anything else we need to know?: When we actually encountered this problem, only one pause container was lost. In order to reproduce this bug, I deleted all pause containers. Environment: Kubernetes version (use ): 1.20.1 Cloud provider or hardware configuration: x8664 OS (e.g: ):18.04.5 LTS (Bionic Beaver) Kernel (e.g. ): Linux ecs-linux-0008-43ee 4.15.0-123-generic -Ubuntu SMP Wed Oct 21 09:40:11 UTC 2020 x8664 x8664 x8664 GNU/Linux Install tools: Network plugin and version (if this is a network-related bug): Others:\n/sig node /assign\nIssues go stale after 90d of inactivity. Mark the issue as fresh with . Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with . Send feedback to sig-contributor-experience at . /lifecycle stale\nStale issues rot after 30d of inactivity. Mark the issue as fresh with . Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with . Send feedback to sig-contributor-experience at . /lifecycle rotten\nRotten issues close after 30d of inactivity. Reopen the issue with . Mark the issue as fresh with . Send feedback to sig-contributor-experience at . /close\nClosing this issue. 
containerMap, err := buildContainerMapFromRuntime(runtimeService) if err != nil { return fmt.Errorf(\"failed to build map of initial containers from runtime: %v\", err) } err = cm.memoryManager.Start(memorymanager.ActivePodsFunc(activePods), sourcesReady, podStatusProvider, runtimeService, containerMap) containerMap := buildContainerMapFromRuntime(runtimeService) err := cm.memoryManager.Start(memorymanager.ActivePodsFunc(activePods), sourcesReady, podStatusProvider, runtimeService, containerMap) if err != nil { return fmt.Errorf(\"start memory manager error: %v\", err) }", "commid": "kubernetes_pr_108325"}], "negative_passages": []} {"query_id": "q-en-kubernetes-444fa5b2edd377b73e5bc84b27cebb7f3765c55163f585dc8d441d2a1c256789", "query": "What happened: with --cpu-manager-policy=static kubelet will panic when pause container lose during kubelet restarting What you expected to happen: kubelet can start normally How to reproduce it (as minimally and precisely as possible): 1,config kubelet with --cpu-manager-policy=static 2, Run a large number of pods on the test node 3,stop kubelet use command: 4,kill and remove all the pause container like this: ! 5,start kubelet use command: and then kubelet maybe panic, log is : Anything else we need to know?: When we actually encountered this problem, only one pause container was lost. In order to reproduce this bug, I deleted all pause containers. Environment: Kubernetes version (use ): 1.20.1 Cloud provider or hardware configuration: x8664 OS (e.g: ):18.04.5 LTS (Bionic Beaver) Kernel (e.g. ): Linux ecs-linux-0008-43ee 4.15.0-123-generic -Ubuntu SMP Wed Oct 21 09:40:11 UTC 2020 x8664 x8664 x8664 GNU/Linux Install tools: Network plugin and version (if this is a network-related bug): Others:\n/sig node /assign\nIssues go stale after 90d of inactivity. Mark the issue as fresh with . Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with . Send feedback to sig-contributor-experience at . /lifecycle stale\nStale issues rot after 30d of inactivity. Mark the issue as fresh with . Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with . Send feedback to sig-contributor-experience at . /lifecycle rotten\nRotten issues close after 30d of inactivity. Reopen the issue with . Mark the issue as fresh with . Send feedback to sig-contributor-experience at . /close\nClosing this issue. func buildContainerMapFromRuntime(runtimeService internalapi.RuntimeService) (containermap.ContainerMap, error) { func buildContainerMapFromRuntime(runtimeService internalapi.RuntimeService) containermap.ContainerMap { podSandboxMap := make(map[string]string) podSandboxList, _ := runtimeService.ListPodSandbox(nil) for _, p := range podSandboxList {", "commid": "kubernetes_pr_108325"}], "negative_passages": []} {"query_id": "q-en-kubernetes-444fa5b2edd377b73e5bc84b27cebb7f3765c55163f585dc8d441d2a1c256789", "query": "What happened: with --cpu-manager-policy=static kubelet will panic when pause container lose during kubelet restarting What you expected to happen: kubelet can start normally How to reproduce it (as minimally and precisely as possible): 1,config kubelet with --cpu-manager-policy=static 2, Run a large number of pods on the test node 3,stop kubelet use command: 4,kill and remove all the pause container like this: ! 
5,start kubelet use command: and then kubelet maybe panic, log is : Anything else we need to know?: When we actually encountered this problem, only one pause container was lost. In order to reproduce this bug, I deleted all pause containers. Environment: Kubernetes version (use ): 1.20.1 Cloud provider or hardware configuration: x8664 OS (e.g: ):18.04.5 LTS (Bionic Beaver) Kernel (e.g. ): Linux ecs-linux-0008-43ee 4.15.0-123-generic -Ubuntu SMP Wed Oct 21 09:40:11 UTC 2020 x8664 x8664 x8664 GNU/Linux Install tools: Network plugin and version (if this is a network-related bug): Others:\n/sig node /assign\nIssues go stale after 90d of inactivity. Mark the issue as fresh with . Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with . Send feedback to sig-contributor-experience at . /lifecycle stale\nStale issues rot after 30d of inactivity. Mark the issue as fresh with . Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with . Send feedback to sig-contributor-experience at . /lifecycle rotten\nRotten issues close after 30d of inactivity. Reopen the issue with . Mark the issue as fresh with . Send feedback to sig-contributor-experience at . /close\nClosing this issue. return nil, fmt.Errorf(\"no PodsandBox found with Id '%s' for container with ID '%s' and Name '%s'\", c.PodSandboxId, c.Id, c.Metadata.Name) klog.InfoS(\"no PodSandBox found for the container\", \"podSandboxId\", c.PodSandboxId, \"containerName\", c.Metadata.Name, \"containerId\", c.Id) continue } containerMap.Add(podSandboxMap[c.PodSandboxId], c.Metadata.Name, c.Id) } return containerMap, nil return containerMap } func isProcessRunningInHost(pid int) (bool, error) {", "commid": "kubernetes_pr_108325"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ee0826757468ea334f95c77509b257c3d392b6713b4311ba7afa125d642dcec9", "query": "The based container images for are shipped with the wrong architecture within the manifest if not amd64: Refers to We updated the images a few minutes ago to buster-v1.3.0, which are now ready to be used as base for the multi-architecture images. If the images contain the right architectures, then it may magically happen that the kube-proxy images are fixed, too. This is because we already use the build syntax for it: /cc\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. local debian_iptables_version=buster-v1.4.0 local debian_iptables_version=buster-v1.5.0 local go_runner_version=buster-v2.2.4 ### If you change any of these lists, please also update DOCKERIZED_BINARIES ### in build/BUILD. And kube::golang::server_image_targets", "commid": "kubernetes_pr_98526"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ee0826757468ea334f95c77509b257c3d392b6713b4311ba7afa125d642dcec9", "query": "The based container images for are shipped with the wrong architecture within the manifest if not amd64: Refers to We updated the images a few minutes ago to buster-v1.3.0, which are now ready to be used as base for the multi-architecture images. If the images contain the right architectures, then it may magically happen that the kube-proxy images are fixed, too. This is because we already use the build syntax for it: /cc\nThis issue is currently awaiting triage. 
If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. version: buster-v1.3.0 version: buster-v1.4.0 refPaths: - path: build/workspace.bzl match: tag =", "commid": "kubernetes_pr_98526"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ee0826757468ea334f95c77509b257c3d392b6713b4311ba7afa125d642dcec9", "query": "The based container images for are shipped with the wrong architecture within the manifest if not amd64: Refers to We updated the images a few minutes ago to buster-v1.3.0, which are now ready to be used as base for the multi-architecture images. If the images contain the right architectures, then it may magically happen that the kube-proxy images are fixed, too. This is because we already use the build syntax for it: /cc\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. version: buster-v1.4.0 version: buster-v1.5.0 refPaths: - path: build/common.sh match: debian_iptables_version=", "commid": "kubernetes_pr_98526"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ee0826757468ea334f95c77509b257c3d392b6713b4311ba7afa125d642dcec9", "query": "The based container images for are shipped with the wrong architecture within the manifest if not amd64: Refers to We updated the images a few minutes ago to buster-v1.3.0, which are now ready to be used as base for the multi-architecture images. If the images contain the right architectures, then it may magically happen that the kube-proxy images are fixed, too. This is because we already use the build syntax for it: /cc\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. 
# Manifest: skopeo inspect docker://gcr.io/k8s-staging-build-image/debian-base:buster-v1.3.0 # Arches: skopeo inspect --raw docker://gcr.io/k8s-staging-build-image/debian-base:buster-v1.3.0 # Manifest: skopeo inspect docker://gcr.io/k8s-staging-build-image/debian-base:buster-v1.4.0 # Arches: skopeo inspect --raw docker://gcr.io/k8s-staging-build-image/debian-base:buster-v1.4.0 _DEBIAN_BASE_DIGEST = { \"manifest\": \"sha256:d66137c7c362d1026dca670d1ff4c25e5b0770e8ace87ac3d008d52e4b0db338\", \"amd64\": \"sha256:a5ab028d9a730b78af9abb15b5db9b2e6f82448ab269d6f3a07d1834c571ccc6\", \"arm\": \"sha256:94e611363760607366ca1fed9375105b6c5fc922ab1249869b708690ca13733c\", \"arm64\": \"sha256:83512c52d44587271cd0f355c0a9a7e6c2412ddc66b8a8eb98f994277297a72f\", \"ppc64le\": \"sha256:9c8284b2797b114ebe8f3f1b2b5817a9c7f07f3f82513c49a30e6191a1acc1fc\", \"s390x\": \"sha256:d617637dd4df0bc1cfa524fae3b4892cfe57f7fec9402ad8dfa28e38e82ec688\", \"manifest\": \"sha256:36652ef8e4dd6715de02e9b68e5c122ed8ee06c75f83f5c574b97301e794c3fb\", \"amd64\": \"sha256:afff10fcd513483e492807f8d934bdf0be4a237997f55e0f1f8e34c04a6cb213\", \"arm\": \"sha256:27e6e66ea3c4c4ca6dbfc8c949f0c4c870f038f4500fd267c242422a244f233c\", \"arm64\": \"sha256:4333a5edc9ce6d6660c76104749c2e50e6158e57c8e5956f732991bb032a8ce1\", \"ppc64le\": \"sha256:01a0ba2645883ea8d985460c2913070a90a098056cc6d188122942678923ddb7\", \"s390x\": \"sha256:610526b047d4b528d9e14b4f15347aa4e37af0c47e1307a2f7aebf8745c8a323\", } # Use skopeo to find these values: https://github.com/containers/skopeo # # Example # Manifest: skopeo inspect docker://gcr.io/k8s-staging-build-image/debian-iptables:buster-v1.4.0 # Arches: skopeo inspect --raw docker://gcr.io/k8s-staging-build-image/debian-iptables:buster-v1.4.0 # Manifest: skopeo inspect docker://gcr.io/k8s-staging-build-image/debian-iptables:buster-v1.5.0 # Arches: skopeo inspect --raw docker://gcr.io/k8s-staging-build-image/debian-iptables:buster-v1.5.0 _DEBIAN_IPTABLES_DIGEST = { \"manifest\": \"sha256:87f97cf2b62eb107871ee810f204ccde41affb70b29883aa898e93df85dea0f0\", \"amd64\": \"sha256:da837f39cf3af78adb796c0caa9733449ae99e51cf624590c328e4c9951ace7a\", \"arm\": \"sha256:bb6677337a4dbc3e578a3e87642d99be740dea391dc5e8987f04211c5e23abcd\", \"arm64\": \"sha256:6ad4717d69db2cc47bc2efc91cebb96ba736be1de49e62e0deffdbaf0fa2318c\", \"ppc64le\": \"sha256:168ccfeb861239536826a26da24ab5f68bb5349d7439424b7008b01e8f6534fc\", \"s390x\": \"sha256:5a88d4f4c29bac5b5c93195059b928f7346be11d0f0f7f6da0e14c0bfdbd1362\", \"manifest\": \"sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb\", \"amd64\": \"sha256:b4b8b1e0d4617011dd03f20b804cc2e50bf48bafc36b1c8c7bd23fd44bfd641e\", \"arm\": \"sha256:09f79b3a00268705a8f8462f1528fed536e204905359f21e9965f08dd306c60a\", \"arm64\": \"sha256:b4fa11965f34a9f668c424b401c0af22e88f600d22c899699bdb0bd1e6953ad6\", \"ppc64le\": \"sha256:0ea0be4dec281b506f6ceef4cb3594cabea8d80e2dc0d93c7eb09d46259dd837\", \"s390x\": \"sha256:50ef25fba428b6002ef0a9dea7ceae5045430dc1035d50498a478eefccba17f5\", } # Use skopeo to find these values: https://github.com/containers/skopeo", "commid": "kubernetes_pr_98526"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ee0826757468ea334f95c77509b257c3d392b6713b4311ba7afa125d642dcec9", "query": "The based container images for are shipped with the wrong architecture within the manifest if not amd64: Refers to We updated the images a few minutes ago to buster-v1.3.0, which are now ready to be used as base for the multi-architecture images. 
If the images contain the right architectures, then it may magically happen that the kube-proxy images are fixed, too. This is because we already use the build syntax for it: /cc\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. tag = \"buster-v1.3.0\", # ignored, but kept here for documentation tag = \"buster-v1.4.0\", # ignored, but kept here for documentation ) container_pull(", "commid": "kubernetes_pr_98526"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ee0826757468ea334f95c77509b257c3d392b6713b4311ba7afa125d642dcec9", "query": "The based container images for are shipped with the wrong architecture within the manifest if not amd64: Refers to We updated the images a few minutes ago to buster-v1.3.0, which are now ready to be used as base for the multi-architecture images. If the images contain the right architectures, then it may magically happen that the kube-proxy images are fixed, too. This is because we already use the build syntax for it: /cc\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. tag = \"buster-v1.4.0\", # ignored, but kept here for documentation tag = \"buster-v1.5.0\", # ignored, but kept here for documentation ) def etcd_tarballs():", "commid": "kubernetes_pr_98526"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ee0826757468ea334f95c77509b257c3d392b6713b4311ba7afa125d642dcec9", "query": "The based container images for are shipped with the wrong architecture within the manifest if not amd64: Refers to We updated the images a few minutes ago to buster-v1.3.0, which are now ready to be used as base for the multi-architecture images. If the images contain the right architectures, then it may magically happen that the kube-proxy images are fixed, too. This is because we already use the build syntax for it: /cc\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. REVISION?=2 REVISION?=3 # IMAGE_TAG Uniquely identifies k8s.gcr.io/etcd docker image with a tag of the form \"-\". IMAGE_TAG=$(LATEST_ETCD_VERSION)-$(REVISION)", "commid": "kubernetes_pr_98526"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ee0826757468ea334f95c77509b257c3d392b6713b4311ba7afa125d642dcec9", "query": "The based container images for are shipped with the wrong architecture within the manifest if not amd64: Refers to We updated the images a few minutes ago to buster-v1.3.0, which are now ready to be used as base for the multi-architecture images. If the images contain the right architectures, then it may magically happen that the kube-proxy images are fixed, too. This is because we already use the build syntax for it: /cc\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. 
BASEIMAGE?=k8s.gcr.io/build-image/debian-base:buster-v1.3.0 BASEIMAGE?=k8s.gcr.io/build-image/debian-base:buster-v1.4.0 endif ifeq ($(ARCH),arm) BASEIMAGE?=k8s.gcr.io/build-image/debian-base-arm:buster-v1.3.0 BASEIMAGE?=k8s.gcr.io/build-image/debian-base-arm:buster-v1.4.0 endif ifeq ($(ARCH),arm64) BASEIMAGE?=k8s.gcr.io/build-image/debian-base-arm64:buster-v1.3.0 BASEIMAGE?=k8s.gcr.io/build-image/debian-base-arm64:buster-v1.4.0 endif ifeq ($(ARCH),ppc64le) BASEIMAGE?=k8s.gcr.io/build-image/debian-base-ppc64le:buster-v1.3.0 BASEIMAGE?=k8s.gcr.io/build-image/debian-base-ppc64le:buster-v1.4.0 endif ifeq ($(ARCH),s390x) BASEIMAGE?=k8s.gcr.io/build-image/debian-base-s390x:buster-v1.3.0 BASEIMAGE?=k8s.gcr.io/build-image/debian-base-s390x:buster-v1.4.0 endif RUNNERIMAGE?=gcr.io/distroless/static:latest", "commid": "kubernetes_pr_98526"}], "negative_passages": []} {"query_id": "q-en-kubernetes-ee0826757468ea334f95c77509b257c3d392b6713b4311ba7afa125d642dcec9", "query": "The based container images for are shipped with the wrong architecture within the manifest if not amd64: Refers to We updated the images a few minutes ago to buster-v1.3.0, which are now ready to be used as base for the multi-architecture images. If the images contain the right architectures, then it may magically happen that the kube-proxy images are fixed, too. This is because we already use the build syntax for it: /cc\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. configs[DebianIptables] = Config{buildImageRegistry, \"debian-iptables\", \"buster-v1.4.0\"} configs[DebianIptables] = Config{buildImageRegistry, \"debian-iptables\", \"buster-v1.5.0\"} configs[EchoServer] = Config{e2eRegistry, \"echoserver\", \"2.2\"} configs[Etcd] = Config{gcRegistry, \"etcd\", \"3.4.13-0\"} configs[GlusterDynamicProvisioner] = Config{dockerGluster, \"glusterdynamic-provisioner\", \"v1.0\"}", "commid": "kubernetes_pr_98526"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3444fabb883927bdace65eb5d9db61f45e39011ade9a032d43fca15b1006c909", "query": "What happened: The current in-tree PV topology is still using beta version label \"failure-\". This was intentionally not upgrade because of CSI migration. However, since the CSI migration timeline is likely to slip, and with the fix of , it should be safe to update the in-tree PVC to GA version () So that we can unblock removing the beta label from node. What you expected to happen: Replace all beta topology label to GA version once merged. /sig storage /cc\n/assign\nGitHub didn't allow me to assign the following users: kassarl. Note that only , repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see if k == v1.LabelFailureDomainBetaZone { if k == v1.LabelTopologyZone || k == v1.LabelFailureDomainBetaZone { values, err = volumehelpers.LabelZonesToList(v) if err != nil { return nil, fmt.Errorf(\"failed to convert label string for Zone: %s to a List: %v\", v, err)", "commid": "kubernetes_pr_102414"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3444fabb883927bdace65eb5d9db61f45e39011ade9a032d43fca15b1006c909", "query": "What happened: The current in-tree PV topology is still using beta version label \"failure-\". This was intentionally not upgrade because of CSI migration. 
However, since the CSI migration timeline is likely to slip, and with the fix of , it should be safe to update the in-tree PVC to GA version () so that we can unblock removing the beta label from node. What you expected to happen: Replace all beta topology labels with the GA version once merged. /sig storage /cc\n/assign\nKey: v1.LabelTopologyZone, Values: []string{\"zone-a\"}, }, { Key: v1.LabelTopologyRegion, Values: []string{\"region-a\"}, }, }} topologySelectorTermWithBetaLabels := v1.TopologySelectorTerm{[]v1.TopologySelectorLabelRequirement{ { Key: v1.LabelFailureDomainBetaZone, Values: []string{\"zone-a\"}, },", "commid": "kubernetes_pr_102414"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3444fabb883927bdace65eb5d9db61f45e39011ade9a032d43fca15b1006c909", "query": "What happened: The current in-tree PV topology is still using beta version label \"failure-\". This was intentionally not upgraded because of CSI migration. However, since the CSI migration timeline is likely to slip, and with the fix of , it should be safe to update the in-tree PVC to GA version () so that we can unblock removing the beta label from node. What you expected to happen: Replace all beta topology labels with the GA version once merged. /sig storage /cc\n/assign\nGitHub didn't allow me to assign the following users: kassarl. Note that only , repo collaborators and people who have commented on this issue/PR can be assigned.
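As a hedged illustration of the label migration discussed above (not code from the PR): consumers that still need to work across both label generations typically prefer the GA `topology.kubernetes.io/*` keys and fall back to the deprecated beta `failure-domain.beta.kubernetes.io/*` keys. The sketch assumes the `k8s.io/api/core/v1` label constants; the `zoneAndRegion` helper name is made up.

```go
// topologyzone.go - illustrative helper: prefer the GA topology labels and
// fall back to the beta labels when reading a node's zone and region.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func zoneAndRegion(node *v1.Node) (string, string) {
	labels := node.Labels
	zone := labels[v1.LabelTopologyZone]
	if zone == "" {
		zone = labels[v1.LabelFailureDomainBetaZone]
	}
	region := labels[v1.LabelTopologyRegion]
	if region == "" {
		region = labels[v1.LabelFailureDomainBetaRegion]
	}
	return zone, region
}

func main() {
	node := &v1.Node{}
	node.Labels = map[string]string{
		v1.LabelTopologyZone:   "zone-a",
		v1.LabelTopologyRegion: "region-a",
	}
	z, r := zoneAndRegion(node)
	fmt.Println(z, r) // zone-a region-a
}
```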
Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see nodeFd := node.ObjectMeta.Labels[v1.LabelFailureDomainBetaZone] nodeRegion := node.ObjectMeta.Labels[v1.LabelFailureDomainBetaRegion] nodeFd := node.ObjectMeta.Labels[v1.LabelTopologyZone] nodeRegion := node.ObjectMeta.Labels[v1.LabelTopologyRegion] nodeZone := &cloudprovider.Zone{FailureDomain: nodeFd, Region: nodeRegion} nodeInfo := &NodeInfo{dataCenter: res.datacenter, vm: vm, vcServer: res.vc, vmUUID: nodeUUID, zone: nodeZone} nm.addNodeInfo(node.ObjectMeta.Name, nodeInfo)", "commid": "kubernetes_pr_102414"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3444fabb883927bdace65eb5d9db61f45e39011ade9a032d43fca15b1006c909", "query": "What happened: The current in-tree PV topology is still using beta version label \"failure-\". This was intentionally not upgrade because of CSI migration. However, since the CSI migration timeline is likely to slip, and with the fix of , it should be safe to update the in-tree PVC to GA version () So that we can unblock removing the beta label from node. What you expected to happen: Replace all beta topology label to GA version once merged. /sig storage /cc\n/assign\nGitHub didn't allow me to assign the following users: kassarl. Note that only , repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see 0 { labels[v1.LabelFailureDomainBetaRegion] = dsZones[0].Region labels[v1.LabelFailureDomainBetaZone] = dsZones[0].FailureDomain labels[v1.LabelTopologyRegion] = dsZones[0].Region labels[v1.LabelTopologyZone] = dsZones[0].FailureDomain } return labels, nil }", "commid": "kubernetes_pr_102414"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3444fabb883927bdace65eb5d9db61f45e39011ade9a032d43fca15b1006c909", "query": "What happened: The current in-tree PV topology is still using beta version label \"failure-\". This was intentionally not upgrade because of CSI migration. However, since the CSI migration timeline is likely to slip, and with the fix of , it should be safe to update the in-tree PVC to GA version () So that we can unblock removing the beta label from node. What you expected to happen: Replace all beta topology label to GA version once merged. /sig storage /cc\n/assign\nGitHub didn't allow me to assign the following users: kassarl. Note that only , repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see Key: v1.LabelFailureDomainBetaZone, Key: v1.LabelTopologyZone, Values: zones, }, },", "commid": "kubernetes_pr_102414"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3444fabb883927bdace65eb5d9db61f45e39011ade9a032d43fca15b1006c909", "query": "What happened: The current in-tree PV topology is still using beta version label \"failure-\". This was intentionally not upgrade because of CSI migration. However, since the CSI migration timeline is likely to slip, and with the fix of , it should be safe to update the in-tree PVC to GA version () So that we can unblock removing the beta label from node. What you expected to happen: Replace all beta topology label to GA version once merged. /sig storage /cc\n/assign\nGitHub didn't allow me to assign the following users: kassarl. Note that only , repo collaborators and people who have commented on this issue/PR can be assigned. 
Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see v1.LabelFailureDomainBetaZone: zoneB, v1.LabelTopologyZone: zoneB, } verifyPodSchedulingFails(client, namespace, nodeSelectorMap, scParameters, zones, storagev1.VolumeBindingWaitForFirstConsumer) })", "commid": "kubernetes_pr_102414"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3444fabb883927bdace65eb5d9db61f45e39011ade9a032d43fca15b1006c909", "query": "What happened: The current in-tree PV topology is still using beta version label \"failure-\". This was intentionally not upgrade because of CSI migration. However, since the CSI migration timeline is likely to slip, and with the fix of , it should be safe to update the in-tree PVC to GA version () So that we can unblock removing the beta label from node. What you expected to happen: Replace all beta topology label to GA version once merged. /sig storage /cc\n/assign\nGitHub didn't allow me to assign the following users: kassarl. Note that only , repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see pvZoneLabels := strings.Split(pv.ObjectMeta.Labels[\"failure-domain.beta.kubernetes.io/zone\"], \"__\") pvZoneLabels := strings.Split(pv.ObjectMeta.Labels[v1.LabelTopologyZone], \"__\") for _, zone := range zones { gomega.Expect(pvZoneLabels).Should(gomega.ContainElement(zone), \"Incorrect or missing zone labels in pv.\") }", "commid": "kubernetes_pr_102414"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c41f05d8910be5009b755790528ec94d72d551b527b45c1ed91bde3c53341f9a", "query": " %v:%v (nodeIP) and getting ZERO host endpoints\", config.NodeIP, config.NodeIP, config.NodeHTTPPort)) err = config.DialFromNode(\"http\", config.NodeIP, config.NodeHTTPPort, config.MaxTries, config.MaxTries, sets.NewString()) // #106770 MaxTries can be very large on large clusters, with the risk that a new NodePort is created by another test and start to answer traffic. // Since we only want to assert that traffic is not being forwarded anymore and the retry timeout is 2 seconds, consider the test is correct // if the service doesn't answer after 10 tries. 
err = config.DialFromNode(\"http\", config.NodeIP, config.NodeHTTPPort, 10, 10, sets.NewString()) if err != nil { framework.Failf(\"Error dialing http from node: %v\", err) framework.Failf(\"Failure validating that node port service STOPPED removed properly: %v\", err) } })", "commid": "kubernetes_pr_106990"}], "negative_passages": []} {"query_id": "q-en-kubernetes-b00d4aab62f403b854c1a0aa3c8ac8b12f2203f1df9b02fc2cc1a8c8fa361c58", "query": "gce-master-scale-correctness Kubernetes e2e suite.[sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow] ci-kubernetes-e2e-gce-scale-correctness.Overall kubetest.Test Kubernetes e2e suite.[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly] Kubernetes e2e suite.[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition Kubernetes e2e suite.[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols 12/1/2021 17:36:39 ET No response No response /sig storage /sig network\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. %v:%v (nodeIP) and getting ZERO host endpoints\", config.NodeIP, config.NodeIP, config.NodeUDPPort)) err = config.DialFromNode(\"udp\", config.NodeIP, config.NodeUDPPort, config.MaxTries, config.MaxTries, sets.NewString()) // #106770 MaxTries can be very large on large clusters, with the risk that a new NodePort is created by another test and start to answer traffic. // Since we only want to assert that traffic is not being forwarded anymore and the retry timeout is 2 seconds, consider the test is correct // if the service doesn't answer after 10 tries. err = config.DialFromNode(\"udp\", config.NodeIP, config.NodeUDPPort, 10, 10, sets.NewString()) if err != nil { framework.Failf(\"Failure validating that node port service STOPPED removed properly: %v\", err) }", "commid": "kubernetes_pr_106990"}], "negative_passages": []} {"query_id": "q-en-kubernetes-4a058229eb86de6555a991f0b180b07c42b35b29d04a1ae7b1d2febd668ac9b4", "query": "During a Kubernetes deployment the connectivity to web services from Windows nodes is flapping. There are intermittent traffic delays and some errors observed on web services during the deployment. Post deployment the flapping is no longer observed. No flapping in connectivity during the deployment. Create a deployment with multiple services and monitor connectivity. 
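A simplified sketch of the idea behind the Windows kube-proxy change that the following passages apply (stub types only, not the real winkernel proxier code): create a load-balancer policy only when the endpoint list is non-empty, and log a skip otherwise, so empty policies are not created and torn down while a deployment rolls out.

```go
// lbguard.go - stylized illustration of guarding policy creation on a
// non-empty endpoint list; names and types here are invented for the example.
package main

import "fmt"

type endpoint struct{ ip string }

func ensureLoadBalancer(vip string, endpoints []endpoint) {
	if len(endpoints) == 0 {
		// Skip creation entirely; the real proxier logs and moves on.
		fmt.Printf("skipping load balancer for %s: no endpoints\n", vip)
		return
	}
	// In the real proxier this is where the HNS policy would be created.
	fmt.Printf("created load balancer for %s with %d endpoints\n", vip, len(endpoints))
}

func main() {
	ensureLoadBalancer("10.0.0.10", nil)
	ensureLoadBalancer("10.0.0.11", []endpoint{{ip: "192.168.1.2"}})
}
```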
No response hnsLoadBalancer, err := hns.getLoadBalancer( nodePortEndpoints, loadBalancerFlags{isDSR: svcInfo.localTrafficDSR, localRoutedVIP: true, sessionAffinity: sessionAffinityClientIP, isIPv6: proxier.isIPv6Mode}, sourceVip, \"\", Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.NodePort()), ) if err != nil { klog.ErrorS(err, \"Policy creation failed\") continue } svcInfo.nodePorthnsID = hnsLoadBalancer.hnsID klog.V(3).InfoS(\"Hns LoadBalancer resource created for nodePort resources\", \"clusterIP\", svcInfo.ClusterIP(), \"hnsID\", hnsLoadBalancer.hnsID) if len(nodePortEndpoints) > 0 { hnsLoadBalancer, err := hns.getLoadBalancer( nodePortEndpoints, loadBalancerFlags{isDSR: svcInfo.localTrafficDSR, localRoutedVIP: true, sessionAffinity: sessionAffinityClientIP, isIPv6: proxier.isIPv6Mode}, sourceVip, \"\", Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.NodePort()), ) if err != nil { klog.ErrorS(err, \"Policy creation failed\") continue } svcInfo.nodePorthnsID = hnsLoadBalancer.hnsID klog.V(3).InfoS(\"Hns LoadBalancer resource created for nodePort resources\", \"clusterIP\", svcInfo.ClusterIP(), \"nodeport\", svcInfo.NodePort(), \"hnsID\", hnsLoadBalancer.hnsID) } else { klog.V(3).InfoS(\"Skipped creating Hns LoadBalancer for nodePort resources\", \"clusterIP\", svcInfo.ClusterIP(), \"nodeport\", svcInfo.NodePort(), \"hnsID\", hnsLoadBalancer.hnsID) } } // Create a Load Balancer Policy for each external IP", "commid": "kubernetes_pr_106936"}], "negative_passages": []} {"query_id": "q-en-kubernetes-4a058229eb86de6555a991f0b180b07c42b35b29d04a1ae7b1d2febd668ac9b4", "query": "During a Kubernetes deployment the connectivity to web services from Windows nodes is flapping. There are intermittent traffic delays and some errors observed on web services during the deployment. Post deployment the flapping is no longer observed. No flapping in connectivity during the deployment. Create a deployment with multiple services and monitor connectivity. 
No response // Try loading existing policies, if already available hnsLoadBalancer, err = hns.getLoadBalancer( externalIPEndpoints, loadBalancerFlags{isDSR: svcInfo.localTrafficDSR, sessionAffinity: sessionAffinityClientIP, isIPv6: proxier.isIPv6Mode}, sourceVip, externalIP.ip, Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.Port()), ) if err != nil { klog.ErrorS(err, \"Policy creation failed\") continue if len(externalIPEndpoints) > 0 { // Try loading existing policies, if already available hnsLoadBalancer, err = hns.getLoadBalancer( externalIPEndpoints, loadBalancerFlags{isDSR: svcInfo.localTrafficDSR, sessionAffinity: sessionAffinityClientIP, isIPv6: proxier.isIPv6Mode}, sourceVip, externalIP.ip, Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.Port()), ) if err != nil { klog.ErrorS(err, \"Policy creation failed\") continue } externalIP.hnsID = hnsLoadBalancer.hnsID klog.V(3).InfoS(\"Hns LoadBalancer resource created for externalIP resources\", \"externalIP\", externalIP, \"hnsID\", hnsLoadBalancer.hnsID) } else { klog.V(3).InfoS(\"Skipped creating Hns LoadBalancer for externalIP resources\", \"externalIP\", externalIP, \"hnsID\", hnsLoadBalancer.hnsID) } externalIP.hnsID = hnsLoadBalancer.hnsID klog.V(3).InfoS(\"Hns LoadBalancer resource created for externalIP resources\", \"externalIP\", externalIP, \"hnsID\", hnsLoadBalancer.hnsID) } // Create a Load Balancer Policy for each loadbalancer ingress for _, lbIngressIP := range svcInfo.loadBalancerIngressIPs {", "commid": "kubernetes_pr_106936"}], "negative_passages": []} {"query_id": "q-en-kubernetes-4a058229eb86de6555a991f0b180b07c42b35b29d04a1ae7b1d2febd668ac9b4", "query": "During a Kubernetes deployment the connectivity to web services from Windows nodes is flapping. There are intermittent traffic delays and some errors observed on web services during the deployment. Post deployment the flapping is no longer observed. No flapping in connectivity during the deployment. Create a deployment with multiple services and monitor connectivity. 
No response hnsLoadBalancer, err := hns.getLoadBalancer( lbIngressEndpoints, loadBalancerFlags{isDSR: svcInfo.preserveDIP || svcInfo.localTrafficDSR, useMUX: svcInfo.preserveDIP, preserveDIP: svcInfo.preserveDIP, sessionAffinity: sessionAffinityClientIP, isIPv6: proxier.isIPv6Mode}, sourceVip, lbIngressIP.ip, Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.Port()), ) if err != nil { klog.ErrorS(err, \"Policy creation failed\") continue if len(lbIngressEndpoints) > 0 { hnsLoadBalancer, err := hns.getLoadBalancer( lbIngressEndpoints, loadBalancerFlags{isDSR: svcInfo.preserveDIP || svcInfo.localTrafficDSR, useMUX: svcInfo.preserveDIP, preserveDIP: svcInfo.preserveDIP, sessionAffinity: sessionAffinityClientIP, isIPv6: proxier.isIPv6Mode}, sourceVip, lbIngressIP.ip, Enum(svcInfo.Protocol()), uint16(svcInfo.targetPort), uint16(svcInfo.Port()), ) if err != nil { klog.ErrorS(err, \"Policy creation failed\") continue } lbIngressIP.hnsID = hnsLoadBalancer.hnsID klog.V(3).InfoS(\"Hns LoadBalancer resource created for loadBalancer Ingress resources\", \"lbIngressIP\", lbIngressIP) } else { klog.V(3).InfoS(\"Skipped creating Hns LoadBalancer for loadBalancer Ingress resources\", \"lbIngressIP\", lbIngressIP) } lbIngressIP.hnsID = hnsLoadBalancer.hnsID klog.V(3).InfoS(\"Hns LoadBalancer resource created for loadBalancer Ingress resources\", \"lbIngressIP\", lbIngressIP) } svcInfo.policyApplied = true klog.V(2).InfoS(\"Policy successfully applied for service\", \"serviceInfo\", svcInfo)", "commid": "kubernetes_pr_106936"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3e7e95e6f142ca39e6e3ca0b78c3932bd6cbdfdc898eed9d3ffda27d28f904c7", "query": "We should make the change in PR API reference links to should use HTTPS, not HTTP Visit h and look at the IANA hyperlinks Once this fix is made, a PR to is needed to update the docs (by autogenerating them). This typically would be deferred until the next Kubernetes release. v1.23 not applicable not applicable not applicable not applicable not applicable\n/sig docs\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"type\": \"string\" }, \"name\": {", "commid": "kubernetes_pr_107603"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3e7e95e6f142ca39e6e3ca0b78c3932bd6cbdfdc898eed9d3ffda27d28f904c7", "query": "We should make the change in PR API reference links to should use HTTPS, not HTTP Visit h and look at the IANA hyperlinks Once this fix is made, a PR to is needed to update the docs (by autogenerating them). This typically would be deferred until the next Kubernetes release. 
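For context on the field whose documentation the next passages touch (illustrative only, not part of the PR): `appProtocol` is an optional string on `ServicePort`/`EndpointPort`, and un-prefixed values are reserved for IANA-registered service names. A minimal sketch, assuming the `k8s.io/api/core/v1` types:

```go
// svcport.go - minimal example of setting the optional appProtocol field on a
// ServicePort; values and names here are examples only.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	proto := "https" // un-prefixed names are reserved for IANA service names
	port := v1.ServicePort{
		Name:        "web",
		Port:        443,
		AppProtocol: &proto,
	}
	fmt.Printf("%s -> %d (appProtocol=%s)\n", port.Name, port.Port, *port.AppProtocol)
}
```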
v1.23 not applicable not applicable not applicable not applicable not applicable\n/sig docs\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"type\": \"string\" }, \"name\": {", "commid": "kubernetes_pr_107603"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3e7e95e6f142ca39e6e3ca0b78c3932bd6cbdfdc898eed9d3ffda27d28f904c7", "query": "We should make the change in PR API reference links to should use HTTPS, not HTTP Visit h and look at the IANA hyperlinks Once this fix is made, a PR to is needed to update the docs (by autogenerating them). This typically would be deferred until the next Kubernetes release. v1.23 not applicable not applicable not applicable not applicable not applicable\n/sig docs\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. \"appProtocol\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"appProtocol\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", } func (EndpointPort) SwaggerDoc() map[string]string {", "commid": "kubernetes_pr_107603"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3e7e95e6f142ca39e6e3ca0b78c3932bd6cbdfdc898eed9d3ffda27d28f904c7", "query": "We should make the change in PR API reference links to should use HTTPS, not HTTP Visit h and look at the IANA hyperlinks Once this fix is made, a PR to is needed to update the docs (by autogenerating them). This typically would be deferred until the next Kubernetes release. v1.23 not applicable not applicable not applicable not applicable not applicable\n/sig docs\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. \"appProtocol\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). 
Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"appProtocol\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"port\": \"The port that will be exposed by this service.\", \"targetPort\": \"Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service\", \"nodePort\": \"The port on each node on which this service is exposed when type is NodePort or LoadBalancer. Usually assigned by the system. If a value is specified, in-range, and not in use it will be used, otherwise the operation will fail. If not specified, a port will be allocated if this Service requires one. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type from NodePort to ClusterIP). More info: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport\",", "commid": "kubernetes_pr_107603"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3e7e95e6f142ca39e6e3ca0b78c3932bd6cbdfdc898eed9d3ffda27d28f904c7", "query": "We should make the change in PR API reference links to should use HTTPS, not HTTP Visit h and look at the IANA hyperlinks Once this fix is made, a PR to is needed to update the docs (by autogenerating them). This typically would be deferred until the next Kubernetes release. v1.23 not applicable not applicable not applicable not applicable not applicable\n/sig docs\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. // RFC-6335 and http://www.iana.org/assignments/service-names). // RFC-6335 and https://www.iana.org/assignments/service-names). // Non-standard protocols should use prefixed names such as // mycompany.com/my-custom-protocol. // +optional", "commid": "kubernetes_pr_107603"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3e7e95e6f142ca39e6e3ca0b78c3932bd6cbdfdc898eed9d3ffda27d28f904c7", "query": "We should make the change in PR API reference links to should use HTTPS, not HTTP Visit h and look at the IANA hyperlinks Once this fix is made, a PR to is needed to update the docs (by autogenerating them). This typically would be deferred until the next Kubernetes release. v1.23 not applicable not applicable not applicable not applicable not applicable\n/sig docs\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. \"appProtocol\": \"The application protocol for this port. 
This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"appProtocol\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", } func (EndpointPort) SwaggerDoc() map[string]string {", "commid": "kubernetes_pr_107603"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3e7e95e6f142ca39e6e3ca0b78c3932bd6cbdfdc898eed9d3ffda27d28f904c7", "query": "We should make the change in PR API reference links to should use HTTPS, not HTTP Visit h and look at the IANA hyperlinks Once this fix is made, a PR to is needed to update the docs (by autogenerating them). This typically would be deferred until the next Kubernetes release. v1.23 not applicable not applicable not applicable not applicable not applicable\n/sig docs\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol. This is a beta field that is guarded by the ServiceAppProtocol feature gate and enabled by default.\", \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol. This is a beta field that is guarded by the ServiceAppProtocol feature gate and enabled by default.\", \"type\": \"string\" }, \"name\": {", "commid": "kubernetes_pr_107603"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3e7e95e6f142ca39e6e3ca0b78c3932bd6cbdfdc898eed9d3ffda27d28f904c7", "query": "We should make the change in PR API reference links to should use HTTPS, not HTTP Visit h and look at the IANA hyperlinks Once this fix is made, a PR to is needed to update the docs (by autogenerating them). This typically would be deferred until the next Kubernetes release. v1.23 not applicable not applicable not applicable not applicable not applicable\n/sig docs\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol. 
This is a beta field that is guarded by the ServiceAppProtocol feature gate and enabled by default.\", \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol. This is a beta field that is guarded by the ServiceAppProtocol feature gate and enabled by default.\", \"type\": \"string\" }, \"name\": {", "commid": "kubernetes_pr_107603"}], "negative_passages": []} {"query_id": "q-en-kubernetes-3e7e95e6f142ca39e6e3ca0b78c3932bd6cbdfdc898eed9d3ffda27d28f904c7", "query": "We should make the change in PR API reference links to should use HTTPS, not HTTP Visit h and look at the IANA hyperlinks Once this fix is made, a PR to is needed to update the docs (by autogenerating them). This typically would be deferred until the next Kubernetes release. v1.23 not applicable not applicable not applicable not applicable not applicable\n/sig docs\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"description\": \"The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol.\", \"type\": \"string\" }, \"name\": {", "commid": "kubernetes_pr_107603"}], "negative_passages": []} {"query_id": "q-en-kubernetes-2214f549829b0bccc5ce423eaca68e7f13803c72fd2591e1ab24d03ad7fdf316", "query": "Condition PIDPressure is not set to true when the system thread number exceeds even it's reaching the limit. Condition PIDPressure is set to true correctly See cause below The bug is due to , which truncates any process number greater than because of type conversion. See kernel code . Reading it from should be working. v1.23.1 \"fmt\" \"io/ioutil\" \"strconv\" \"strings\" \"syscall\" \"time\" \"k8s.io/apimachinery/pkg/apis/meta/v1\" v1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" statsapi \"k8s.io/kubelet/pkg/apis/stats/v1alpha1\" )", "commid": "kubernetes_pr_107108"}], "negative_passages": []} {"query_id": "q-en-kubernetes-2214f549829b0bccc5ce423eaca68e7f13803c72fd2591e1ab24d03ad7fdf316", "query": "Condition PIDPressure is not set to true when the system thread number exceeds even it's reaching the limit. Condition PIDPressure is set to true correctly See cause below The bug is due to , which truncates any process number greater than because of type conversion. See kernel code . Reading it from should be working. v1.23.1 var info syscall.Sysinfo_t syscall.Sysinfo(&info) procs := int64(info.Procs) rlimit.NumOfRunningProcesses = &procs // Prefer to read \"/proc/loadavg\" when possible because sysinfo(2) // returns truncated number when greater than 65538. 
See // https://github.com/kubernetes/kubernetes/issues/107107 if procs, err := runningTaskCount(); err == nil { rlimit.NumOfRunningProcesses = &procs } else { var info syscall.Sysinfo_t syscall.Sysinfo(&info) procs := int64(info.Procs) rlimit.NumOfRunningProcesses = &procs } rlimit.Time = v1.NewTime(time.Now()) return rlimit, nil } func runningTaskCount() (int64, error) { // Example: 1.36 3.49 4.53 2/3518 3715089 bytes, err := ioutil.ReadFile(\"/proc/loadavg\") if err != nil { return 0, err } fields := strings.Fields(string(bytes)) if len(fields) < 5 { return 0, fmt.Errorf(\"not enough fields in /proc/loadavg\") } subfields := strings.Split(fields[3], \"/\") if len(subfields) != 2 { return 0, fmt.Errorf(\"error parsing fourth field of /proc/loadavg\") } return strconv.ParseInt(subfields[1], 10, 64) } ", "commid": "kubernetes_pr_107108"}], "negative_passages": []} {"query_id": "q-en-kubernetes-9f83a27f9592105435bbac386998288b990d6f5d165b5fe2e78e2c4c68ba6eee", "query": "There is no receive or close operation on channel of struct () . Thus, a goroutine leak happens when is executed (), such as when running . no goroutine leak run TestRunPositiveRegister() No response defer func() { require.NoError(t, p.Stop()) }() timestampBeforeRegistration := time.Now() dsw.AddOrUpdatePlugin(socketPath) waitForRegistration(t, socketPath, timestampBeforeRegistration, asw)", "commid": "kubernetes_pr_115617"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c3c001164d06967bffc88ae4a064b83e4730508e6212066ef72f1419123a3c10", "query": "I deployed a pod with and on a namesapace with a set with and elements and I get the error: The pod to be deployed Create a new namespace Create (using a manifest file named resource-quota-) and verify a named compute-resources Content of the resource-quota- file Create (using a manifest file named ) a pod named high-priority with the and for both and . 
Content of the file The pod is deployed correctly if I delete the missingSet := sets.NewString() missingSetResourceToContainerNames := make(map[string]sets.String) for i := range pod.Spec.Containers { enforcePodContainerConstraints(&pod.Spec.Containers[i], requiredSet, missingSet) enforcePodContainerConstraints(&pod.Spec.Containers[i], requiredSet, missingSetResourceToContainerNames) } for i := range pod.Spec.InitContainers { enforcePodContainerConstraints(&pod.Spec.InitContainers[i], requiredSet, missingSet) enforcePodContainerConstraints(&pod.Spec.InitContainers[i], requiredSet, missingSetResourceToContainerNames) } if len(missingSet) == 0 { if len(missingSetResourceToContainerNames) == 0 { return nil } return fmt.Errorf(\"must specify %s\", strings.Join(missingSet.List(), \",\")) var resources = sets.NewString() for resource := range missingSetResourceToContainerNames { resources.Insert(resource) } var errorMessages = make([]string, 0, len(missingSetResourceToContainerNames)) for _, resource := range resources.List() { errorMessages = append(errorMessages, fmt.Sprintf(\"%s for: %s\", resource, strings.Join(missingSetResourceToContainerNames[resource].List(), \",\"))) } return fmt.Errorf(\"must specify %s\", strings.Join(errorMessages, \"; \")) } // GroupResource that this evaluator tracks", "commid": "kubernetes_pr_107210"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c3c001164d06967bffc88ae4a064b83e4730508e6212066ef72f1419123a3c10", "query": "I deployed a pod with and on a namesapace with a set with and elements and I get the error: The pod to be deployed Create a new namespace Create (using a manifest file named resource-quota-) and verify a named compute-resources Content of the resource-quota- file Create (using a manifest file named ) a pod named high-priority with the and for both and . Content of the file The pod is deployed correctly if I delete the func enforcePodContainerConstraints(container *corev1.Container, requiredSet, missingSet sets.String) { func enforcePodContainerConstraints(container *corev1.Container, requiredSet sets.String, missingSetResourceToContainerNames map[string]sets.String) { requests := container.Resources.Requests limits := container.Resources.Limits containerUsage := podComputeUsageHelper(requests, limits) containerSet := quota.ToSet(quota.ResourceNames(containerUsage)) if !containerSet.Equal(requiredSet) { difference := requiredSet.Difference(containerSet) missingSet.Insert(difference.List()...) if difference := requiredSet.Difference(containerSet); difference.Len() != 0 { for _, diff := range difference.List() { if _, ok := missingSetResourceToContainerNames[diff]; !ok { missingSetResourceToContainerNames[diff] = sets.NewString(container.Name) } else { missingSetResourceToContainerNames[diff].Insert(container.Name) } } } } }", "commid": "kubernetes_pr_107210"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c3c001164d06967bffc88ae4a064b83e4730508e6212066ef72f1419123a3c10", "query": "I deployed a pod with and on a namesapace with a set with and elements and I get the error: The pod to be deployed Create a new namespace Create (using a manifest file named resource-quota-) and verify a named compute-resources Content of the resource-quota- file Create (using a manifest file named ) a pod named high-priority with the and for both and . 
Content of the file The pod is deployed correctly if I delete the Name: \"dummy\", Resources: api.ResourceRequirements{ Requests: api.ResourceList{api.ResourceCPU: resource.MustParse(\"1m\")}, Limits: api.ResourceList{api.ResourceCPU: resource.MustParse(\"2m\")}, }, }}, }, }, required: []corev1.ResourceName{corev1.ResourceMemory}, err: `must specify memory for: dummy`, }, \"multiple init container resource missing\": { pod: &api.Pod{ Spec: api.PodSpec{ InitContainers: []api.Container{{ Name: \"foo\", Resources: api.ResourceRequirements{ Requests: api.ResourceList{api.ResourceCPU: resource.MustParse(\"1m\")}, Limits: api.ResourceList{api.ResourceCPU: resource.MustParse(\"2m\")}, }, }, { Name: \"bar\", Resources: api.ResourceRequirements{ Requests: api.ResourceList{api.ResourceCPU: resource.MustParse(\"1m\")}, Limits: api.ResourceList{api.ResourceCPU: resource.MustParse(\"2m\")},", "commid": "kubernetes_pr_107210"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c3c001164d06967bffc88ae4a064b83e4730508e6212066ef72f1419123a3c10", "query": "I deployed a pod with and on a namesapace with a set with and elements and I get the error: The pod to be deployed Create a new namespace Create (using a manifest file named resource-quota-) and verify a named compute-resources Content of the resource-quota- file Create (using a manifest file named ) a pod named high-priority with the and for both and . Content of the file The pod is deployed correctly if I delete the err: `must specify memory`, err: `must specify memory for: bar,foo`, }, \"container resource missing\": { pod: &api.Pod{ Spec: api.PodSpec{ Containers: []api.Container{{ Name: \"dummy\", Resources: api.ResourceRequirements{ Requests: api.ResourceList{api.ResourceCPU: resource.MustParse(\"1m\")}, Limits: api.ResourceList{api.ResourceCPU: resource.MustParse(\"2m\")}, }, }}, }, }, required: []corev1.ResourceName{corev1.ResourceMemory}, err: `must specify memory for: dummy`, }, \"multiple container resource missing\": { pod: &api.Pod{ Spec: api.PodSpec{ Containers: []api.Container{{ Name: \"foo\", Resources: api.ResourceRequirements{ Requests: api.ResourceList{api.ResourceCPU: resource.MustParse(\"1m\")}, Limits: api.ResourceList{api.ResourceCPU: resource.MustParse(\"2m\")}, }, }, { Name: \"bar\", Resources: api.ResourceRequirements{ Requests: api.ResourceList{api.ResourceCPU: resource.MustParse(\"1m\")}, Limits: api.ResourceList{api.ResourceCPU: resource.MustParse(\"2m\")},", "commid": "kubernetes_pr_107210"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c3c001164d06967bffc88ae4a064b83e4730508e6212066ef72f1419123a3c10", "query": "I deployed a pod with and on a namesapace with a set with and elements and I get the error: The pod to be deployed Create a new namespace Create (using a manifest file named resource-quota-) and verify a named compute-resources Content of the resource-quota- file Create (using a manifest file named ) a pod named high-priority with the and for both and . 
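A hedged sketch of the error-message shape the resource-quota passages in this section converge on (an illustrative helper, not the actual evaluator code): group the resources missing requests/limits by the containers that omit them, then build a deterministic "must specify ..." message.

```go
// quotamsg.go - illustrative only: build an error like
// "must specify cpu for: bar,foo; memory for: bar,foo".
package main

import (
	"fmt"
	"sort"
	"strings"
)

func missingResourceError(missing map[string][]string) error {
	if len(missing) == 0 {
		return nil
	}
	resources := make([]string, 0, len(missing))
	for r := range missing {
		resources = append(resources, r)
	}
	sort.Strings(resources)
	parts := make([]string, 0, len(resources))
	for _, r := range resources {
		containers := append([]string(nil), missing[r]...)
		sort.Strings(containers)
		parts = append(parts, fmt.Sprintf("%s for: %s", r, strings.Join(containers, ",")))
	}
	return fmt.Errorf("must specify %s", strings.Join(parts, "; "))
}

func main() {
	err := missingResourceError(map[string][]string{
		"memory": {"foo", "bar"},
		"cpu":    {"bar", "foo"},
	})
	fmt.Println(err) // must specify cpu for: bar,foo; memory for: bar,foo
}
```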
Content of the file The pod is deployed correctly if I delete the err: `must specify memory`, err: `must specify memory for: bar,foo`, }, \"container resource missing multiple\": { pod: &api.Pod{ Spec: api.PodSpec{ Containers: []api.Container{{ Name: \"foo\", Resources: api.ResourceRequirements{}, }, { Name: \"bar\", Resources: api.ResourceRequirements{}, }}, }, }, required: []corev1.ResourceName{corev1.ResourceMemory, corev1.ResourceCPU}, err: `must specify cpu for: bar,foo; memory for: bar,foo`, }, } evaluator := NewPodEvaluator(nil, clock.RealClock{})", "commid": "kubernetes_pr_107210"}], "negative_passages": []} {"query_id": "q-en-kubernetes-c3c001164d06967bffc88ae4a064b83e4730508e6212066ef72f1419123a3c10", "query": "I deployed a pod with and on a namesapace with a set with and elements and I get the error: The pod to be deployed Create a new namespace Create (using a manifest file named resource-quota-) and verify a named compute-resources Content of the resource-quota- file Create (using a manifest file named ) a pod named high-priority with the and for both and . Content of the file The pod is deployed correctly if I delete the t.Errorf(\"%s unexpected error: %v\", testName, err) t.Errorf(\"%s want: %v,got: %v\", testName, test.err, err) } } }", "commid": "kubernetes_pr_107210"}], "negative_passages": []} {"query_id": "q-en-kubernetes-a8d1b8b007f3b41febd8e28d8c3f6d762909a93ce545d0ad364f8e401a78d99f", "query": "In the implement of GCE mode WaitForAttach: when the runtime.GOOS = \"windows\" and it meets err , it should instead of . the code should be like this : None The modification is in order to match the code cosistant pattern. Any GCE Any Any Any Any\nThis issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label and provide further guidance. The label can be by org members by writing in a comment. return \"\", err } return id, err return id, nil } partition := \"\"", "commid": "kubernetes_pr_107236"}], "negative_passages": []} {"query_id": "q-en-kubernetes-4fde01bf9e733eb02396ee79e06a0d3d7fd635df01f60f51582a52b831148901", "query": "/kind failing-test