Active streams affect yamux::connection::rtt
#5387
Comments
Did you make any configuration changes to yamux?
Thank you for reporting @0x7CFE!
Please link or post your configuration. Which yamux version are you using? The latest release would be the first thing to try. That said, yamux does prioritize sending over receiving, though especially on slow connections I would expect the socket to return pending often enough for the read side to still make progress. Based on the logs above, I assume you are seeing the large rtt on the receiver side. In case you are using an older yamux version, please update to the latest one and check whether the issue persists.
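To make the "prioritizes sending over receiving" point concrete, here is a deliberately simplified sketch of a send-first driver loop; this is not the actual yamux code, just an illustration of how a busy outbound queue can starve the inbound path that would answer a PING:

```rust
use std::collections::VecDeque;

enum Frame {
    Data(Vec<u8>),
    Ping(u32),
}

struct Driver {
    outbound: VecDeque<Frame>,
    inbound: VecDeque<Frame>,
}

impl Driver {
    /// One iteration of a send-first event loop.
    fn poll_once(&mut self) {
        // 1. Outbound frames are flushed first; as long as there is something
        //    to send, we return early and never look at the inbound side.
        if let Some(frame) = self.outbound.pop_front() {
            let _ = frame; // "write the frame to the socket"
            return;
        }
        // 2. Only with an empty send queue do we process inbound frames, which
        //    is where a PING would be answered with a PONG.
        if let Some(Frame::Ping(nonce)) = self.inbound.pop_front() {
            self.outbound.push_back(Frame::Ping(nonce));
        }
    }
}

fn main() {
    let mut driver = Driver {
        outbound: (0..3).map(|_| Frame::Data(vec![0u8; 1024])).collect(),
        inbound: VecDeque::from([Frame::Ping(42)]),
    };
    // The PING is only turned into a PONG after all queued data has been sent.
    for _ in 0..4 {
        driver.poll_once();
    }
    assert!(matches!(driver.outbound.front(), Some(Frame::Ping(42))));
}
```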
No, everything is default. The only setting I changed was the idle connection timeout:

let swarm = {
    // let pings = libp2p_ping::Behaviour::new(libp2p_ping::Config::new());
    let tasks = cbor::Behaviour::new(
        [(StreamProtocol::new("/grad/tasks/1"), ProtocolSupport::Full)],
        request_response::Config::default(),
    );
    let state = libp2p_stream::Behaviour::new();
    libp2p::SwarmBuilder::with_existing_identity(net_keypair)
        .with_tokio()
        // .with_quic()
        .with_tcp(
            tcp::Config::default(),
            noise::Config::new,
            yamux::Config::default,
        )?
        .with_behaviour(|_| MyBehaviour { tasks, state })?
        .with_swarm_config(|c| c.with_idle_connection_timeout(Duration::from_secs(300)))
        .build()
};
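For comparison, a bare-bones swarm with only the ping behaviour could be used to measure stream-level round trips against yamux's connection-level rtt. A rough sketch, assuming a recent libp2p with the tokio, tcp, noise, yamux and ping features; the address is a placeholder and field names may differ between releases:

```rust
use futures::StreamExt;
use libp2p::{noise, ping, swarm::SwarmEvent, tcp, yamux, Multiaddr};
use std::{error::Error, time::Duration};

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let mut swarm = libp2p::SwarmBuilder::with_new_identity()
        .with_tokio()
        .with_tcp(
            tcp::Config::default(),
            noise::Config::new,
            yamux::Config::default,
        )?
        .with_behaviour(|_| ping::Behaviour::new(ping::Config::new()))?
        .with_swarm_config(|c| c.with_idle_connection_timeout(Duration::from_secs(300)))
        .build();

    // Placeholder address: point this at the remote peer under test.
    let remote: Multiaddr = "/ip4/127.0.0.1/tcp/4001".parse()?;
    swarm.dial(remote)?;

    loop {
        if let SwarmEvent::Behaviour(ping::Event { peer, result, .. }) =
            swarm.select_next_some().await
        {
            // Stream-level round trip as measured by the ping protocol.
            println!("ping to {peer}: {result:?}");
        }
    }
}
```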
Yes, it's the receiver that sent the ping ages ago.
Thanks for the hint, I will try to update and see what happens.
I did a cargo update.
I am sitting in a café with really crappy internet (~300 Kb/s), and apparently pings are now moving faster. It could just be a TCP windowing fluke, but still, you can see that the first ping traveled for 87 seconds. At the very same time, the stream was transferring a megabyte chunk of data. After that hiccup, subsequent pings were delivered within a few seconds. That's way better than before, but still inconsistent. Could it be that stream granularity somehow prevents the multiplexer from scheduling other packets?
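One experiment on the application side would be to split large payloads into smaller writes and flush between them, so the connection task gets a chance to interleave other frames; whether this actually helps depends on how yamux slices frames internally. A rough sketch using the futures AsyncWrite API (names here are illustrative):

```rust
use futures::{AsyncWrite, AsyncWriteExt};

/// Write `payload` in small chunks instead of one huge write, flushing after
/// each chunk so the multiplexer can schedule other frames in between.
async fn send_chunked<S: AsyncWrite + Unpin>(
    stream: &mut S,
    payload: &[u8],
) -> std::io::Result<()> {
    const CHUNK: usize = 16 * 1024; // 16 KiB per write
    for chunk in payload.chunks(CHUNK) {
        stream.write_all(chunk).await?;
        stream.flush().await?;
    }
    Ok(())
}
```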
Can you reproduce this behavior on other networks? What latency do you get with an ICMP ping?
Yes, I saw similar behavior on other networks (see the issue description). The ping was a few hundred ms.
Are you able to run with a patched yamux?
Unstaged changes (1)
modified yamux/src/connection.rs
@@ -416,6 +416,33 @@ impl<T: AsyncRead + AsyncWrite + Unpin> Active<T> {
Poll::Pending => {}
}
+ if self.pending_read_frame.is_none() {
+ match self.socket.poll_next_unpin(cx) {
+ Poll::Ready(Some(frame)) => {
+ match self.on_frame(frame?)? {
+ Action::None => {}
+ Action::New(stream) => {
+ log::trace!("{}: new inbound {} of {}", self.id, stream, self);
+ return Poll::Ready(Ok(stream));
+ }
+ Action::Ping(f) => {
+ log::trace!("{}/{}: pong", self.id, f.header().stream_id());
+ self.pending_read_frame.replace(f.into());
+ }
+ Action::Terminate(f) => {
+ log::trace!("{}: sending term", self.id);
+ self.pending_read_frame.replace(f.into());
+ }
+ }
+ continue;
+ }
+ Poll::Ready(None) => {
+ return Poll::Ready(Err(ConnectionError::Closed));
+ }
+ Poll::Pending => {}
+ }
+ }
+
if self.pending_write_frame.is_none() {
match self.stream_receivers.poll_next_unpin(cx) {
Poll::Ready(Some((_, Some(StreamCommand::SendFrame(frame))))) => {
@@ -448,33 +475,6 @@ impl<T: AsyncRead + AsyncWrite + Unpin> Active<T> {
}
}
- if self.pending_read_frame.is_none() {
- match self.socket.poll_next_unpin(cx) {
- Poll::Ready(Some(frame)) => {
- match self.on_frame(frame?)? {
- Action::None => {}
- Action::New(stream) => {
- log::trace!("{}: new inbound {} of {}", self.id, stream, self);
- return Poll::Ready(Ok(stream));
- }
- Action::Ping(f) => {
- log::trace!("{}/{}: pong", self.id, f.header().stream_id());
- self.pending_read_frame.replace(f.into());
- }
- Action::Terminate(f) => {
- log::trace!("{}: sending term", self.id);
- self.pending_read_frame.replace(f.into());
- }
- }
- continue;
- }
- Poll::Ready(None) => {
- return Poll::Ready(Err(ConnectionError::Closed));
- }
- Poll::Pending => {}
- }
- }
-
// If we make it this far, at least one of the above must have registered a waker.
return Poll::Pending;
}
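One way to run with such a patched yamux is a Cargo override that points the transitive yamux dependency at a local checkout containing the patch; the path below is a placeholder:

```toml
# In the application's Cargo.toml; adjust the path to your local
# rust-yamux checkout that carries the patch above.
[patch.crates-io]
yamux = { path = "../rust-yamux/yamux" }
```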
Do I understand correctly that you are using the same yamux version on both ends?
Will try when I have time. Probably this week.
Yes, both nodes run the same binary, so I'm pretty sure they both use the same version.
Summary
I noticed that yamux::connection::rtt reports round-trip times of up to 70 seconds when a stream is active, especially on slower connections. When the stream stops transferring data, rtt drops back to normal values of a few hundred ms.
Expected behavior
I expect active streams to make no difference; RTT messages should flow freely regardless.
Actual behavior
A PONG message can be delayed by up to 1 minute even though the channel itself is 12 Mbit/s and the real latency is around 250 ms.
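As a rough sanity check: at 12 Mbit/s, even a full 1 MB data chunk queued ahead of the PONG would take only about 8 Mbit / 12 Mbit/s ≈ 0.7 s to serialize, so a one-minute delay cannot be explained by link capacity alone and points at scheduling inside the multiplexer.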
Relevant log output
Possible Solution
Check whether the stream API uses/obeys multiplexing fairly. RTT probes, like other control messages, should probably have higher priority.
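As an illustration of the prioritization idea (this is not the yamux implementation, just a sketch): keep control frames in their own queue and always drain it before any data frames.

```rust
use std::collections::VecDeque;

enum Frame {
    Control(&'static str), // e.g. ping/pong, window update, reset
    Data(Vec<u8>),
}

#[derive(Default)]
struct SendQueue {
    control: VecDeque<Frame>,
    data: VecDeque<Frame>,
}

impl SendQueue {
    fn push(&mut self, frame: Frame) {
        match frame {
            Frame::Control(_) => self.control.push_back(frame),
            Frame::Data(_) => self.data.push_back(frame),
        }
    }

    /// Control frames always win, so a pong cannot sit behind megabytes of data.
    fn next(&mut self) -> Option<Frame> {
        self.control.pop_front().or_else(|| self.data.pop_front())
    }
}

fn main() {
    let mut q = SendQueue::default();
    q.push(Frame::Data(vec![0u8; 1 << 20]));
    q.push(Frame::Control("pong"));
    // Even though the pong was enqueued last, it is sent first.
    assert!(matches!(q.next(), Some(Frame::Control("pong"))));
}
```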
Version
Would you like to work on fixing this bug?
Maybe