Actually, no, it isn't nitpicking. In the networking world there is a very clear distinction between the two. Bandwidth is capacity. Increasing the capacity of a container does not increase the speed at which it travels across the same medium. It increases the payload, but the container still covers the same distance in the same amount of time (the typical definition of speed). Throughput is not the measure of performance for small-packet communications like those used in games and VoIP. Latency is the metric you use, because it is about response time, not how big a package you can deliver (provided you can encapsulate a large enough chunk, which has not been an issue since we crossed roughly the 384k bandwidth mark).
The difference is that one line may deliver up to 16MB in 110ms while the other delivers up to 75MB, yet the time for that transfer can STILL be 110ms in both cases. You simply packed the stuff into a bigger box traveling at the same speed. When the package can be delivered in the same number of handshakes, the process takes the same amount of time, because both lines move the same number of packages at the same speed. So a 2KB package that gets broken down into 1460-byte chunks transfers in the same amount of time on either line, because it is just two transferable units on both, and both are in reality moving the data at the same speed. A 75MB package takes longer to deliver on the narrower line because it can't all be sent in one cycle; it has to be broken up into 5 passes. But each 16MB pass takes the same amount of time as the single 75MB pass, because that is the capacity of the line, not its speed.
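That passes arithmetic can be sketched in a few lines. This is a deliberately simplified model, assuming each full-capacity pass costs one complete round trip; the 16MB and 75MB capacities and the 110ms figure are just the example numbers from above, and the function name is mine:

```python
import math

def transfer_time_ms(payload_mb, capacity_per_pass_mb, round_trip_ms):
    """Toy model: each full-capacity pass costs one round trip,
    no matter how full the 'box' actually is."""
    passes = math.ceil(payload_mb / capacity_per_pass_mb)
    return passes * round_trip_ms

# The wide line fits 75MB in one pass; the narrow line fits 16MB.
print(transfer_time_ms(75, 75, 110))   # one pass
print(transfer_time_ms(75, 16, 110))   # five passes
# A tiny 2KB payload fits in one pass on both lines, so both
# deliver it in the same 110ms.
print(transfer_time_ms(2 / 1024, 75, 110))
print(transfer_time_ms(2 / 1024, 16, 110))
```

Under this model the narrower line needs ceil(75/16) = 5 passes, so the big transfer takes five round trips instead of one, while the small payload sees no difference at all.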
So, if the encapsulation is moving 2KB of data, and that can be transferred in the same number of handshakes on both lines with 80ms latency, both lines will deliver that same 2KB in the same amount of time. They transfer the individual bits across the same number of channels (again, there is no over-saturation here) at the same speed, as determined by the combination of the medium's transfer speed, the distance, and the additional factors that shape the line's latency (such as delays in the route). That, in layman's terms, is what latency is: delay. It is essentially the time the communication's round trip takes. It just isn't expressed in the nomenclature many are accustomed to, like MPH or KPH. It's a flat measure of time, because for response times that is what we need to know: how long it took to complete.
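To make the "medium speed plus distance" part concrete, here is a rough sketch of the propagation floor on round-trip delay. The fiber velocity factor and the sample distance are my own approximations, not figures from this discussion, and the model ignores routing and queuing delays entirely:

```python
SPEED_OF_LIGHT_KM_S = 299_792
# Light in fiber propagates at very roughly 2/3 of c.
FIBER_VELOCITY_FACTOR = 0.67

def min_rtt_ms(distance_km):
    """Lower bound on round-trip time from propagation alone:
    no route delays, no queuing, no retransmissions."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_VELOCITY_FACTOR)
    return 2 * one_way_s * 1000

# e.g. a transatlantic hop of roughly 5,570 km great-circle distance
print(round(min_rtt_ms(5570), 1))
```

However fat the pipe, that floor does not move; it's set by distance and the medium, which is exactly why latency, not bandwidth, is the number that matters for response time.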
Line speed is more or less constant; it's bound to the physical characteristics of the medium (electricity vs. light). Latency is the delay, a product of the distance traveled at that line speed, and it can be inflated by things like noise, reflection, and outright interruptions that force retransmission. To a certain extent, latency has a floor you cannot reduce it past, because it is a product of speed and distance. Bandwidth, however, is easy to manipulate: more channels, more wires, etc. increase the bandwidth. Bandwidth combines with your line speed and latency to determine overall throughput for a given period of time. So no, bandwidth is not speed. It may be perceived as such when you compare over- and under-saturation of the bandwidth on different lines, but that is throughput, not speed.
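The way bandwidth and latency combine into throughput can be sketched with a single formula: total time is the fixed latency plus payload divided by bandwidth. The function name and the sample numbers here are mine, chosen only to illustrate the point:

```python
def effective_throughput_mb_s(payload_mb, bandwidth_mb_s, latency_s):
    """Effective throughput of one transfer: the fixed latency cost
    is amortized over the payload, so small transfers are
    latency-bound while large ones approach the raw bandwidth."""
    total_time_s = latency_s + payload_mb / bandwidth_mb_s
    return payload_mb / total_time_s

# A tiny game packet barely notices a 10x bandwidth difference...
print(effective_throughput_mb_s(0.001, 10, 0.080))
print(effective_throughput_mb_s(0.001, 100, 0.080))
# ...while a large download is dominated by bandwidth.
print(effective_throughput_mb_s(500, 10, 0.080))
print(effective_throughput_mb_s(500, 100, 0.080))
```

For the small packet the 80ms of latency swamps everything, so fattening the pipe changes almost nothing; for the 500MB transfer the pipe is what matters. That is the whole argument in one expression: bandwidth is a term in the throughput calculation, not the speed itself.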