During each testing cycle, the
application performs five tests: sending and receiving TCP data,
sending and receiving UDP data, and sending and receiving a time
probe packet. Based on these tests, it computes TCP and UDP
upstream and downstream throughput values (current, for the latest
test, and averaged, for all tests), as well as the round-trip time.
When all tasks in a cycle are completed, a new cycle automatically
begins.
Throughput
Throughput (also often referred
to as "goodput") is the amount of application-layer data delivered
from the client to the server (upstream) or from the server to the
client (downstream) per second. The protocol overhead is not
included, so when we talk, for example, about a TCP throughput
rate of 1 Mbps, we mean that 125 Kbytes of actual data payload were
sent between two network nodes during one second, not including
TCP, IP, and Ethernet or 802.11 headers.
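The bits-to-bytes arithmetic above can be sketched in a few lines of Python. The helper name and sampling model are illustrative, not part of the application; it simply converts a payload byte count and a measurement interval into a goodput figure in Mbps:

```python
def goodput_mbps(payload_bytes: int, seconds: float) -> float:
    """Application-layer throughput ("goodput") in megabits per second.

    Counts only the data payload, not TCP/IP or link-layer headers.
    """
    return payload_bytes * 8 / seconds / 1_000_000

# 125,000 bytes (125 Kbytes) of payload delivered in one second
# is exactly 1 Mbps of goodput.
print(goodput_mbps(125_000, 1.0))  # 1.0
```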
Packet Loss
Packet loss is applicable to UDP tests
only: TCP acknowledges and retransmits every segment, so no
application-layer data can be lost. UDP loss is calculated as the percentage of data
that was lost during transmission. For example, let's interpret the
following result:
UDP Down:
60.00 Mbps (Ave: 55.00), Loss: 40.0%
This means that during the
latest test cycle, the server sent 1 megabit of data in 10
milliseconds (the actual data amount and duration may vary; these
numbers serve only as an example), and the client received 0.6
megabits in 10 milliseconds, while the remaining 0.4 megabits were
lost en route.
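The numbers in this example can be verified with a short Python sketch. The function names are hypothetical; the sketch only reproduces the arithmetic behind the reported figures:

```python
def throughput_mbps(bits: float, seconds: float) -> float:
    """Throughput in megabits per second for a given bit count."""
    return bits / seconds / 1e6

def udp_loss_percent(sent_bits: float, received_bits: float) -> float:
    """Percentage of sent data that never arrived."""
    return (sent_bits - received_bits) / sent_bits * 100

sent = 1_000_000       # server sent 1 megabit...
received = 600_000     # ...the client received 0.6 megabits
duration = 0.010       # in 10 milliseconds

print(throughput_mbps(received, duration))   # 60.0 (Mbps, as reported)
print(udp_loss_percent(sent, received))      # 40.0 (% loss, as reported)
```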
Round-Trip Time
Round-trip time (RTT) is the
length of time it takes for a data packet to be sent from the
client to the server and back. The application uses TCP packets for
RTT measurements.
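The idea of timing a TCP probe can be illustrated with a minimal Python sketch. This is not the application's actual measurement protocol; it assumes a simple echo exchange, with a local echo server standing in for the test server:

```python
import socket
import threading
import time

def echo_server(srv: socket.socket) -> None:
    # Accept one connection and echo the probe straight back.
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(64))

def measure_rtt_ms() -> float:
    """Time one small probe over an established TCP connection."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # pick any free local port
    srv.listen(1)
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

    with socket.create_connection(srv.getsockname()) as cli:
        start = time.perf_counter()
        cli.sendall(b"probe")    # send the time probe
        cli.recv(64)             # wait for it to come back
        return (time.perf_counter() - start) * 1000

print(f"RTT: {measure_rtt_ms():.3f} ms")
```

A real measurement would be taken against the remote test server, so the reported RTT reflects the network path rather than the loopback interface.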