Performance benchmarks #1904
Replies: 17 comments
-
Awesome benchmarks 👍 Performance has been good enough for our use cases, but we probably need to spend some time optimising the concurrent fetch. I imagine it ties in very heavily with whatever our desired batching / caching solution is.
-
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
-
Not stale.
-
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
-
Nope, not stale.
-
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
-
Not stale.
-
Similar to the feature comparison, we should do a benchmark comparison and optimize.
-
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
-
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
-
@frederikhors please reopen it. Similar to the feature comparison, we should do a benchmark comparison as well.
-
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
-
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
-
Hello everyone!
I work on several projects that combine GraphQL + Go; some of them are high-availability services handling millions of requests a day.
In several of the APIs we have implemented we use "graph-gophers/graphql-go", but we are considering changing GraphQL libraries. We have run several tests with gqlgen and first impressions are very good; we have already started the migration in some of our projects that are not high availability.
One of our key teams, which works on an important high-availability project, has tested different GraphQL + Go tools to find the most efficient solution. The results are discouraging: we have found that response time becomes unsustainable as the number of nodes in the response increases.
We would like to share the results with you, both to check that the tests we have done are consistent and that we are not misusing the tool (gqlgen), and to hear any ideas you may have for improving these results.
Any suggestions or comments would be appreciated. I personally prefer to use gqlgen in our projects over the alternatives, but if response time cannot scale efficiently we will be forced to look at less elegant solutions such as REST or gRPC.
What do you think?
We have written our results up in this GitHub project. Below is a graph with the results.
Thanks so much!