At first, this may seem somewhat silly. After all, what's the point of a data communications network if you can't count on the data getting to its destination? To understand this better, recall the concept of protocol layering. Data delivery is unreliable at the Network Layer, where IP operates. If an application requires reliable data delivery, the Transport Layer must provide it, building upon the unreliable delivery facilities of the Network Layer. This is the main function of TCP, which uses sequence numbers and timeouts to detect data loss, then retransmits the lost data until it is received and acknowledged.
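To make that division of labor concrete, here is a minimal sketch (in Python, and emphatically not real TCP) of how a transport layer can build reliable delivery on top of an unreliable channel: each segment carries a sequence number, and the sender retransmits until it sees an acknowledgment. The loss rate, retry limit, and function names are all hypothetical.

\begin{verbatim}
import random

# Toy illustration only: a sender retransmits each numbered segment
# over an unreliable channel until it is acknowledged.
LOSS_RATE = 0.3          # assumed probability the "network" drops a segment
MAX_RETRIES = 10         # give up eventually, as a real stack would

def unreliable_send(seq, payload, receiver_state):
    """Deliver (seq, payload) to the receiver, or silently drop it."""
    if random.random() < LOSS_RATE:
        return None                   # segment lost: no acknowledgment
    receiver_state[seq] = payload     # receiver stores the data
    return seq                        # acknowledgment echoes the sequence number

def reliable_send(data, receiver_state):
    """Send each segment, retransmitting on 'timeout' until acknowledged."""
    for seq, payload in enumerate(data):
        for attempt in range(MAX_RETRIES):
            ack = unreliable_send(seq, payload, receiver_state)
            if ack == seq:
                break                 # acknowledged: move to the next segment
            # no acknowledgment before the timeout would have expired: retransmit
        else:
            raise RuntimeError("segment %d never acknowledged" % seq)

received = {}
reliable_send(["unreliable", "delivery", "made", "reliable"], received)
print([received[i] for i in sorted(received)])
\end{verbatim}

The point of the sketch is only that reliability lives entirely in the endpoints: the channel is free to drop anything, and the sender's bookkeeping makes up the difference.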
So, why go through all this in the first place? Well, for one thing, if our network fails briefly at any point, in any way, there should be no serious problems. If a switching node becomes overloaded with traffic, it can simply discard some of the excess. If a link fails while a packet is being transferred, there's no need for an elaborate recovery procedure. The assumption of unreliable delivery, and the consequent demand that software be able to handle intermittent failures, significantly reduces demands on hardware and low-level software design. Sporadic network outages might slow the tempo, but the show will go on.
Unreliable delivery has been a mixed blessing for the Internet. It has certainly lived up to its billing as the basis of a fault-tolerant network, but it has created almost as many problems as it has solved.
TCP guarantees data delivery, but makes no guarantees about how long that delivery will take. In some applications, such as telephone calls, this is simply unacceptable. If the data arrives too late, it is useless. Worse, TCP will stop everything to ensure retransmission of the lost data, possibly disrupting other data that could have arrived on time. Some Internet protocols, such as ST, have been proposed to address this problem, but none have gained widespread acceptance and all are a far cry from the guaranteed bandwidth of a phone call.
\end{soapbox}