Not all real numbers can be represented with a finite number of digits, and computers, with their limited amount of storage, are forced to work only with a finite subset of them, \(F\) say. In general, this subset is not closed under any of the basic arithmetic operations, and when a result cannot be represented exactly with the available digits, it is replaced by a number that belongs to \(F\) and is not too far from the "right" answer. This process, known as rounding, introduces a swarm of tiny errors that, if not taken care of, can grow indefinitely and significantly degrade the results of most numerical computations. In order to get a clearer picture of the limitations and pitfalls of computer arithmetic, this talk will try to answer the following questions:
- Can we actually see roundoff errors?
- Can roundoff errors ever be beneficial?
- What is numerical cancellation, and when is it "catastrophic"?
- What does it mean for an algorithm to be stable?
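As a first taste of the first question, here is a minimal sketch, assuming IEEE 754 double-precision arithmetic (Python's default `float`), where a roundoff error is directly visible:

```python
# Neither 0.1 nor 0.2 has an exact binary floating-point representation,
# so each literal is rounded to the nearest element of F before the sum
# is computed, and the sum itself is rounded once more.
s = 0.1 + 0.2
print(s)         # 0.30000000000000004, not 0.3
print(s == 0.3)  # False
```

The printed value differs from 0.3 only in the sixteenth decimal digit, yet the equality test already fails, which is why numerical code usually compares floats with a tolerance rather than with `==`.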