Currently we compute the bandwidth and latency once, when the Spice client connects. The monitoring is done from the server side.
This computation has several problems:
* It is inaccurate: we send a large buffer filled with zeros, which any router along the way can easily compress.
* It slows down connection startup.
* It may suffer from sampling error: sampling only once can miss the true (or current) average network performance.
* It can miss a change in network performance (e.g. a user on the go).
* The bandwidth and latency variance are also interesting, and a single sample cannot capture them.
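The first problem is easy to demonstrate. A small standalone sketch (Python, not Spice code) shows how well a zero-filled probe buffer compresses, so a compressing hop transfers far less data than we think we sent and the measured bandwidth is wildly overestimated:

```python
import zlib

# A probe buffer like the one the current code sends: all zeros.
PROBE_SIZE = 1024 * 1024  # 1 MiB

zero_probe = bytes(PROBE_SIZE)
compressed = zlib.compress(zero_probe)

# The compressed form is a tiny fraction of the original, so a router
# that compresses traffic moves almost nothing for this probe.
ratio = len(compressed) / PROBE_SIZE
print(f"probe: {PROBE_SIZE} bytes, after compression: {len(compressed)} bytes "
      f"({ratio:.4%} of the original)")
```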
To solve this, we should switch to checking latency and bandwidth continuously, "piggy-backing" on the payloads we already send:
* Start in LAN mode.
* Check bandwidth and latency on a single channel.
  * What about multiple channels? These are different sockets and can be subject to different routing rules.
* Use PING messages on the monitored channel to check latency.
  * How regularly?
* Count bytes for messages in transit to compute bandwidth.
  * When there is enough data in a burst, send a QoS start mark; the peer replies when it has finished receiving -> update stats.
* Additionally/alternatively, we could just check the number of messages in the pipe: if it stays large over a large number of samples, assume this is WAN (a momentarily large pipe is expected under LAN conditions too, due to driver bursts of activity).
* Have a callback list for bandwidth-condition changes, to trigger channel behavior changes and to provide an easy place for logging.
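The monitoring loop above could be structured roughly as follows. This is a sketch only; the class, the threshold, and the EWMA smoothing are assumptions for illustration, not the actual Spice API:

```python
class QosMonitor:
    """Sketch of continuous, piggy-backed QoS monitoring: byte counts
    from acknowledged bursts feed a smoothed bandwidth estimate, and
    registered callbacks fire when the LAN/WAN classification changes."""

    WAN_THRESHOLD_BPS = 10 * 1024 * 1024  # assumed LAN/WAN cutoff, illustrative
    ALPHA = 0.25                          # EWMA smoothing factor (assumed)

    def __init__(self):
        self.mode = "lan"                 # start in LAN mode
        self.bandwidth_bps = None
        self.callbacks = []               # also an easy place to hook logging

    def on_mode_change(self, cb):
        self.callbacks.append(cb)

    def burst_done(self, byte_count, elapsed_s):
        """Called when the peer acknowledges a QoS-marked burst."""
        sample = byte_count / elapsed_s
        if self.bandwidth_bps is None:
            self.bandwidth_bps = sample
        else:
            self.bandwidth_bps = (self.ALPHA * sample
                                  + (1 - self.ALPHA) * self.bandwidth_bps)
        new_mode = "lan" if self.bandwidth_bps >= self.WAN_THRESHOLD_BPS else "wan"
        if new_mode != self.mode:
            self.mode = new_mode
            for cb in self.callbacks:
                cb(self.mode, self.bandwidth_bps)


mon = QosMonitor()
mon.on_mode_change(lambda mode, bw: print(f"mode -> {mode} ({bw / 1e6:.1f} MB/s)"))
# A 1 MiB burst acknowledged after 0.5 s: ~2 MiB/s, below the assumed cutoff.
mon.burst_done(1024 * 1024, 0.5)
```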
Other notes from Marc:
* Whenever a certain amount of data is queued (uplink or downlink, >= BANDWIDTH_THRESHOLD, possibly combined with time elapsed since the last measurement, deviation, etc., and eventually stuffed with empty data to improve the computation):
  * send a (QoS) BANDWIDTH_PING(size) message with the length of the data about to be sent,
  * send the data,
  * after the peer has received the data, it replies with BANDWIDTH_PONG,
  * update the bandwidth estimate.
* The latency measurement is likely to be light, and could be done regularly.
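The BANDWIDTH_PING/PONG estimate could be computed as sketched below. The message names come from the notes; the timing math is an assumption: the time from sending BANDWIDTH_PING to receiving BANDWIDTH_PONG covers the transmission of `size` bytes plus roughly one round trip, so we subtract the separately measured RTT:

```python
def estimate_bandwidth(size_bytes, ping_sent_at, pong_received_at, rtt_s):
    """Return an estimated bandwidth in bytes/second, or None when the
    round trip dominates and the burst is too small to measure."""
    transfer_time = (pong_received_at - ping_sent_at) - rtt_s
    if transfer_time <= 0:
        return None  # burst too small relative to RTT; discard the sample
    return size_bytes / transfer_time


# Example: a 512 KiB burst acknowledged 0.6 s after the ping, with a
# 100 ms RTT, gives roughly 1 MiB/s.
bw = estimate_bandwidth(512 * 1024, 0.0, 0.6, 0.1)
print(f"{bw / 1024:.0f} KiB/s")
```

This also motivates the BANDWIDTH_THRESHOLD above: bursts much smaller than `bandwidth * rtt` produce transfer times near zero and unusable samples.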
See also TFRC: http://www.icir.org/tfrc/