
Ability to skip HTTP requests on initial "ADDED" events #216

Open
pkosiec opened this issue Sep 27, 2022 · 9 comments · May be fixed by #217

pkosiec commented Sep 27, 2022

Hi,

Awesome project! This is exactly what I was looking for.

I use the request functionality to be notified every time one of the watched ConfigMaps or Secrets is updated. However, during container start I'm getting multiple requests to my service for the initial "ADDED" events, which I would like to avoid. It would be great to have this configurable, so that my service is notified on every event except the initial "ADDED" ones.

I can contribute the feature as I already updated the code on my fork. Stay tuned!
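
A minimal sketch of how such an option could work (this is not the code from #217; it assumes the official `kubernetes` Python client, and `notify` is a hypothetical stand-in for the sidecar's request logic): list once at startup, then start the watch at the returned resourceVersion so the replayed "ADDED" events for pre-existing resources are skipped.

```python
# Sketch only: skip the synthetic "ADDED" events a fresh watch replays for
# resources that already existed at container start. Assumes the official
# `kubernetes` Python client; `notify` stands in for the sidecar's HTTP call.
import requests
from kubernetes import client, config, watch

def notify(url, event):
    # Hypothetical placeholder for the sidecar's request logic.
    requests.post(url, json={"type": event["type"]}, timeout=10)

def watch_configmaps(namespace, url, skip_initial_added=True):
    config.load_incluster_config()
    v1 = client.CoreV1Api()

    # List once so we know the cluster state at startup; everything that
    # exists at this point would otherwise arrive as an "ADDED" event.
    snapshot = v1.list_namespaced_config_map(namespace)

    # Starting the watch at the snapshot's resourceVersion means only
    # changes made *after* startup are streamed to us.
    start = snapshot.metadata.resource_version if skip_initial_added else None
    for event in watch.Watch().stream(
        v1.list_namespaced_config_map, namespace, resource_version=start
    ):
        notify(url, event)
```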

@pkosiec pkosiec changed the title Ability to skip HTTP requests on "ADDED" events Ability to skip HTTP requests on initial "ADDED" events Sep 27, 2022

quilicicf commented Feb 6, 2023

I second all of the above! 🙂

I'm using the Jenkins Helm chart and noticed that the URL is called multiple times at startup, which makes startup slower.

In fact, what would be even more awesome would be to bulk-apply changes. Currently, if the configuration is split across multiple files, k8s-sidecar sends an HTTP call for each file, so it may call the URL with an incomplete (and broken) configuration before all files are on the volume.

This can happen quite easily with the Jenkins Helm chart, but I'm sure it can happen in other cases too. My Jenkins instance can end up in pretty bad shape because it tries to load an incomplete configuration. This can also happen at runtime, as we tend to update our configuration files in bulk.

What would be awesome would be to watch for changes and, when one is detected, process all the files at once and send a single HTTP call at the end.

Side note: in my use case, I need the HTTP call to be issued at startup too, just once, when all ConfigMaps have been processed.

I'd be open to working on this (I'm a newbie in Python though).
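
A minimal sketch of that debounce idea (this is not existing k8s-sidecar behaviour; `process_all_files` and `notify_once` are hypothetical placeholders): collect events until the stream has been quiet for a short delay, then process everything and send a single request.

```python
# Sketch of the debounce idea only (not part of k8s-sidecar): collect change
# events during a quiet period, then process everything and send one HTTP
# call. `process_all_files` and `notify_once` are hypothetical placeholders.
import threading

def process_all_files():
    # Hypothetical: write every pending file to the shared volume.
    pass

def notify_once():
    # Hypothetical: single HTTP call once the configuration is complete.
    pass

class Debouncer:
    """Runs `callback` once no trigger() has arrived for `delay` seconds."""

    def __init__(self, delay, callback):
        self._delay = delay
        self._callback = callback
        self._timer = None
        self._lock = threading.Lock()

    def trigger(self):
        # Restart the countdown on every event; the callback fires only
        # after the stream of events has settled.
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self._delay, self._callback)
            self._timer.daemon = True
            self._timer.start()

def on_settled():
    process_all_files()
    notify_once()

debouncer = Debouncer(delay=2.0, callback=on_settled)
# In the watch loop: write each file as usual, but instead of calling the
# URL per file, call debouncer.trigger() and let the timer send one request.
```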

jekkel commented Feb 10, 2023

@quilicicf You could chime in and continue with the PR @pkosiec contributed; the following tasks are still open:

  • rebase with master
  • write tests

@quilicicf

Will try to have a look 👍

pkosiec commented Feb 10, 2023

Hey @quilicicf, sure, feel free to continue my PR. Unfortunately, I didn't have time to update the tests; however, the functionality works as it should.

@quilicicf

I've got a rebased version of the branch now, but I'll need guidance on the tests.

I don't understand how they currently work; I can't find the entry point 🤔

jekkel commented Feb 12, 2023

It's basically a bash script embedded into the GitHub Action... 🙈
Nobody took the time to port that into a proper test framework, but it does the job and is better than nothing.

@quilicicf

No worries, I'm just not used to GitHub Actions yet and was looking for a Jenkinsfile or .travis.yml 🙃 Habits die hard.
I'm actually better with Bash than Python, so that's mostly good news for me 😅


quilicicf commented Feb 17, 2023

I have had a look and I must say it's a bit overwhelming currently 😅

The tests require a K8s cluster to run, so they're relatively cumbersome to run on a dev machine, and I'm having a hard time disentangling input from output and figuring out what tests what :-(

I'm not sure I'm up to the task, unfortunately. It feels too hard to contribute to the tests as they are, and I don't think it's reasonable to suggest creating new tests from scratch in a technical environment I'm not very comfortable with 🙁

jekkel commented Feb 21, 2023

I am sorry to hear that. Let me try to explain what's happening, and maybe you'll change your mind?

  1. we build both the sidecar and a very simple Python-based HTTP server over here; the images are not pushed to a registry but archived as build artifacts
  2. on each tested k8s version (spun up via Kind) we load the images into the cluster and create 2 sidecar pods and 1 HTTP server pod here
  3. we then load the initial "resources" (ConfigMaps and Secrets) into the cluster and wait for their names to appear in the logs of both sidecars here
  4. we archive all the logs so far, as they might be valuable for debugging if the assertions fail later on
  5. using kubectl, we download all the files we expect both sidecars to have created (or not) from the pods into the job workspace here
  6. we update the resources, as the sidecar is expected to pick up updates as well; the changed resource definitions can be found here
  7. we then have a single, monstrous shell statement which includes all assertions using file diffing, file tests, and kubectl calls, and which must fail if a single expected outcome has not been satisfied (see the sketch after this comment)

Hope that helps...?

I know this can't really be tested easily on a dev machine, but we can run the workflow on the PR pretty easily, as often as necessary.
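
For illustration only, here is roughly what one download-and-diff assertion boils down to, sketched in Python rather than the workflow's actual bash (the pod name, in-pod path, and expected file are made up):

```python
# Illustrative only: the real assertions are a bash step in the GitHub
# Actions workflow. Pod name, in-pod path, and expected file are made up.
import subprocess
import sys

def assert_sidecar_wrote(pod, path_in_pod, expected_file):
    # Step 5: copy the file the sidecar should have created out of the pod.
    subprocess.run(
        ["kubectl", "cp", f"{pod}:{path_in_pod}", "downloaded.txt"],
        check=True,
    )
    # Step 7: fail the job if the content differs from the expectation.
    diff = subprocess.run(["diff", "downloaded.txt", expected_file])
    if diff.returncode != 0:
        sys.exit(f"unexpected content for {path_in_pod} in pod {pod}")

assert_sidecar_wrote("sidecar", "/tmp/hello.world", "expected/hello.world")
```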
