For mid-sized networks under a single authority, where the engineer controls most or all of the routers and can monitor the network at any point, it is at least possible to attempt to isolate performance problems. Traffic statistics gathered with tools like SNMP can identify major bottlenecks but are not well suited to detailed analysis. Packet traces are invaluable, but understanding them can be time consuming. Faced with these obstacles, many organizations opt to ``throw bandwidth'' at their performance problems; this deserves serious consideration, since it is often a faster and cheaper solution than picking through every packet that comes over the wire.
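For instance, a rough check for a saturated link needs nothing more than two samples of an interface's octet counter. The sketch below is illustrative only: it assumes the Net-SNMP command-line tools are installed, and the router name, read community, interface index, and sampling interval are made-up values, not recommendations.

\begin{verbatim}
# Minimal sketch: inbound link utilization from two SNMP counter
# samples.  HOST, COMMUNITY, IFINDEX, and the 30-second interval
# are illustrative assumptions.
import subprocess, time

HOST, COMMUNITY, IFINDEX = "router.example.net", "public", 2
IF_IN_OCTETS = "1.3.6.1.2.1.2.2.1.10.%d" % IFINDEX  # IF-MIB ifInOctets
IF_SPEED     = "1.3.6.1.2.1.2.2.1.5.%d"  % IFINDEX  # IF-MIB ifSpeed

def snmp_get(oid):
    # -Oqv makes snmpget print only the value, so it parses directly
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, oid])
    return int(out.split()[-1])

speed  = snmp_get(IF_SPEED)          # nominal bandwidth, bits/second
first  = snmp_get(IF_IN_OCTETS)
time.sleep(30)                       # sampling interval
second = snmp_get(IF_IN_OCTETS)
octets = (second - first) % 2**32    # tolerate one 32-bit counter wrap
print("inbound utilization: %.1f%%" % (100.0 * octets * 8 / (30 * speed)))
\end{verbatim}

A counter pair like this flags a congested interface quickly; it says nothing about which conversations are responsible, which is where packet traces come in.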
A good start for the engineer interested in performance problems is the literature. Nagle's RFC 896, examining congestion problems in 1984, coined the term \emph{congestion collapse} to describe a stable condition in which a network becomes flooded with retransmissions. More recently, Van Jacobson's RFC 1323 discusses possible problems with TCP round-trip-time estimates.
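To fix ideas, one standard formulation of the estimator in question, due to Jacobson and fed with more frequent samples by the timestamp option that RFC 1323 defines, updates a smoothed round-trip time and its mean deviation from each new measurement $R$:
\begin{align*}
\mathit{SRTT}   &\leftarrow (1-\alpha)\,\mathit{SRTT} + \alpha R,\\
\mathit{RTTVAR} &\leftarrow (1-\beta)\,\mathit{RTTVAR} + \beta\,\lvert \mathit{SRTT} - R \rvert,\\
\mathit{RTO}    &\leftarrow \mathit{SRTT} + 4\,\mathit{RTTVAR},
\end{align*}
with the customary gains $\alpha = 1/8$ and $\beta = 1/4$. The problems discussed in the literature arise largely when the samples $R$ are sparse or ambiguous, as when a segment has been retransmitted.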
The late 1980s were the heyday of Internet performance analysis and made Van Jacobson something of a celebrity for his keen insight into TCP performance issues. Yet the last few years, despite heightened interest in the Internet, have yielded few major performance enhancements. There is no reason to think the issue closed. Even during normal operation, many TCP implementations will retransmit needlessly, and almost all will expand their windows far beyond what the network topology dictates. Furthermore, the explosion of traffic volume has dulled Internet engineers' perception of network operation, while their attention has been diverted by the flood of new users clamoring for help. Monitoring network performance is increasingly the responsibility of SNMP-based automated management tools. One thing seems sure: the Internet protocols are better than those of a decade ago, but our understanding of the Internet's performance problems is almost certainly poorer.
\end{soapbox}