Notes: hacking DVO, kubernetes/website issues review/triage, sharing good-first-issues with folks in community, Kubernetes sig-security Tooling meeting, hacking client-generator code #14
May 18, 2021
Quick notes from today:
hacking DVO
DVO aka Deployment Validation Operator is an operator-sdk based operator, which checks deployments and other resources against a curated collection of best practices. Some folks from Red Hat’s AppSRE team are working on building this operator. It’s largely a work in progress currently, & I’m trying to get involved quite early. As part of getting onboarded, I spent a couple of hours in the morning hacking the existing implementation & below is what I did:
- I’ve deployed the DVO operator on a local code-ready-container (crc) cluster.
- I can see the gauge metrics in one of the dvo-operator pod logs.
- I spent time reading the code under `deployment-validation-operator/pkg`, more specifically the validation tests. What I understood there is:
  - `replicas_test.go` is analogous to the `dv_replicas` gauge metric.
  - `requests_limits_test.go` is analogous to the `dv_requests_limits` gauge metric.
- I had a question:
  - Are the gauge metrics `dv_replicas` & `dv_requests_limits` being exported to Prometheus with the help of some exporter when I deploy DVO? I was not able to query them on Prometheus. Maybe I’m missing something here?
  - I just got the following answer (see the sketch after this list):
    DVO generates them, but to see them in Prometheus via the Prometheus Operator, you would need to create a “service monitor” in the namespace of the Prometheus Operator. At least, that’s how app-sre has that set up. It’s because the Prometheus Operator we deploy doesn’t find the “service monitors” in any namespace except its own. I suspect that may be the case for crc as well.
- And, I’m yet to try doing this part ~ making DVO run on top of kube-linter.
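Based on that answer, here is roughly what I think the fix would look like on my crc cluster. This is a minimal, untested sketch: the namespace names, the Service labels, & the metrics port name below are all my assumptions, so they’d need adjusting to match the actual DVO deployment & Prometheus Operator setup:

```sh
# Untested sketch: create a ServiceMonitor in the Prometheus Operator's own
# namespace, so the operator actually discovers it. All names are assumptions.
cat <<'EOF' | oc apply -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: deployment-validation-operator
  namespace: monitoring                      # assumed: namespace running the Prometheus Operator
spec:
  namespaceSelector:
    matchNames:
      - deployment-validation-operator       # assumed: namespace where DVO runs
  selector:
    matchLabels:
      name: deployment-validation-operator   # assumed: labels on the DVO metrics Service
  endpoints:
    - port: http-metrics                     # assumed: metrics port name on that Service
EOF
```

If Prometheus picks this up, the `dv_replicas` & `dv_requests_limits` gauges should then show up as queryable metrics in the Prometheus UI.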
kubernetes/website issues review/triage
I triaged/reviewed the following 4 issues from the kubernetes/website project (& a lot more actually, which I didn’t include because I took no action on them):
- Issue with k8s.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/ #27931
  - For this one, I raised a PR as well ~ fix the kubelet-config-1.21 configmap key value from config to kubelet #28025. It’s labelled `lgtm` now.
- Last command of “Interactive Tutorial - Exposing Your App” interactive lesson is deprecated #27056
- List All Container Images Running in a Cluster #27475
- Unclear requirements for “Using Source IP” tutorial #27162
sharing good-first-issues with folks in community
I shared the last two issues (mentioned above) in the kubernetes in-dev slack channel, after I was done prepping them with solutions. The only thing I left for the folks attempting them was to go right ahead and implement. Well, people might call it spoon feeding (which I myself am totally against). But the fact is, the kubernetes project has become so complex that the entry barrier is emotionally overwhelming, if nothing else. So, yeah, that’s what I did today: tried to help someone get started. At least I made it easy for two of the folks to start.
It made me really happy. I recalled Jason Braganza telling me that paying it forward is the only way of paying it back. I’m trying to do it. 🙂
Kubernetes sig-security Tooling meeting
Attended the very first meeting of a new sub-group/working-group under `sig-security`. This new sub-group is known as `sig-security-tooling`. Today’s meeting was more of an introduction to the project charter, setting up expectations, & making everyone comfortable by discussing the different ways to get involved. I’ll follow along with the coming updates on the project, & wherever I get opportunities, I’ll definitely contribute back. It was lovely talking to some wonderful folks like Tabitha Sable, Pushkar Joglekar and so many others.
hacking client-generator code
I’m currently trying to learn how I can generate the entire `kubernetes-client/python` source code from scratch. I’ve tried the following so far:
- I tried hacking the kubernetes-client/gen tool.
- Following the `README` doc for the client generator, I attempted the steps below:
  - Cloned the `kubernetes-client/gen` repo & ran the following command inside the cloned repo:

    ```sh
    ./openapi/python.sh python-client/ settings-file.sh
    ```

    where `python-client` is the output directory & `settings-file.sh` has this content inside it:

    ```sh
    export KUBERNETES_BRANCH="master"
    export CLIENT_VERSION="v17.17.0"
    export PACKAGE_NAME="client"
    ```

- But building this ended up in the following error:
```text
Step 26/26 : ENTRYPOINT ["mvn-entrypoint.sh", "/generate_client.sh"]
 ---> Using cache
 ---> bd1d3efce8aa
Successfully built bd1d3efce8aa
Successfully tagged psaggu-kubernetes-python-client-gen-with-openapi-generator:v1
--- Running generator inside container...
/source/openapi-generator /
/
--- Downloading and pre-processing OpenAPI spec
/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning)
Error downloading spec file https://raw.githubusercontent.com/psaggu/kubernetes/master/api/openapi-spec/swagger.json. Reason: Not Found
```
- As I can see from the error message, it is because:
  - In the spec file URL ~ https://raw.githubusercontent.com/psaggu/kubernetes/master/api/openapi-spec/swagger.json, it is looking for the `swagger.json` file in a kubernetes repo fork under some GitHub username, `psaggu`.
  - But in my case it is https://raw.githubusercontent.com/Priyankasaggu11929/kubernetes/master/api/openapi-spec/swagger.json
- I discussed this with Haowei Cai, one of the project leads, & he suggested the following:
  - The simpler way to do it is to try using kubernetes-client/python/scripts/update-client.sh, which is how the python client is generated. It builds on top of the client-gen repo & basically sets the configuration for you.
  - As for the issue above, this line controls it ~ kubernetes-client/gen/openapi/openapi-generator/client-generator.sh#L52. It reads the username (that was showing up in the spec file error above) from the `USERNAME` env variable, so maybe I have to set it in my shell & try again.
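Given that hint, I think the retry would look something like the below. A minimal sketch, assuming setting `USERNAME` is all the script needs to locate the fork (the value is just my GitHub handle from the working URL above):

```sh
# Untested sketch: point the generator at my own kubernetes fork, since
# client-generator.sh reads the fork owner from the USERNAME env variable.
export USERNAME="Priyankasaggu11929"

# then re-run the generator exactly as before
./openapi/python.sh python-client/ settings-file.sh
```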
That’s all for the day! It was a wonderful day today. 🙏
PS: Find here the links to all the kubernetes contributions for the month of May, 2021.