How important is the concept of significant figures to applying maths? And if useful, what of the mechanics of computing the answer? Is significance truly significant in concept and calculation for today? And therefore should it be prominent in today's maths education?

Here's our surprising conclusion. Significance arithmetic should be far more significant than it is in real-life maths today, rather like its role (if not all the detail) in maths education, where it is covered fairly extensively. It would really help people get good answers, make far fewer misinterpretations, and judge whether any of the digits they quote are justifiable. But people just aren't using it much in their maths lives.

Yes, paradoxically, I'm saying that this is a case in which traditional maths education got it right(er) and real-world maths didn't know what was good for it! Education is ahead!

So, what's gone wrong with significance out there?

Let's start with traditional notation. If you write down 3.2, it's in practice unclear whether you're precisely and justifiably claiming "2 significant figures", or whether those were simply the digits that came to hand, or all you could be bothered to calculate by hand. The notation (like so much of traditional maths notation) is ambiguous, causing huge confusion to practitioners and particularly students.
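One way to make the ambiguity concrete: if "3.2" really does mean 2 significant figures, it stands for a whole interval of values, not a point. Here's a minimal sketch (the helper `sig_fig_interval` is hypothetical, written for illustration) that reads a decimal string at face value, assuming the last written digit was rounded:

```python
from decimal import Decimal

def sig_fig_interval(s: str) -> tuple[Decimal, Decimal]:
    """Interval of values a decimal string could stand for,
    assuming its last written digit was correctly rounded."""
    # Half a unit in the last written decimal place.
    digits_after_point = len(s.split(".")[1]) if "." in s else 0
    half_ulp = Decimal(5) * Decimal(10) ** (-digits_after_point - 1)
    x = Decimal(s)
    return (x - half_ulp, x + half_ulp)

print(sig_fig_interval("3.2"))    # (Decimal('3.15'), Decimal('3.25'))
print(sig_fig_interval("3.200"))  # (Decimal('3.1995'), Decimal('3.2005'))
```

Written traditionally, "3.2" and "3.200" look like the same number; read as significance claims, they are intervals differing in width by a factor of 100 — and nothing in the notation tells you which claim the writer intended.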

Then there's how calculators and computers usually operate. They churn out whatever their hardware "machine precision" is--often 16 or 19 digits--even if those digits have no justification from the input or the calculation. People either ignore the extra digits or, if they're transcribed, assume those quoting them are ill-educated (rather as the misuse of apostrophes suggests poor education in English).
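You can see this behaviour in any programming language. The snippet below (a sketch; the radius value is made up for illustration) computes an area from a radius quoted to 2 significant figures, and Python dutifully reports the full double-precision result:

```python
import math

# Radius quoted to only 2 significant figures...
radius = 1.3
area = math.pi * radius ** 2

# ...yet the default output reports ~16 digits, none of the
# trailing ones justified by the input's precision.
print(area)

# What the input actually supports:
print(f"{area:.2g}")  # 5.3
```

Nothing in the machine's default behaviour distinguishes the two justified digits from the dozen or so of numerical noise; that judgement is left entirely to the human.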

When you use significance arithmetic, there are several stages that can trip you up. First, you have to be clear what your input precision really is: what error am I actually inputting (e.g. do 2 digits faithfully represent it)? Then the calculations you do can dramatically change how much significance is justified in the output. For example, a steep slope of a function reduces the relative significance coming out (e.g. to 1 digit, because a little variation in the x value produces a big change in y), whereas a small gradient can increase the justifiable significant digits (e.g. to 4). So it really matters not only which functions you're computing with, but where you are on each function.
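The slope effect can be sketched numerically. The helper below (hypothetical, and a crude two-point version of what real significance arithmetic does) pushes an input interval of ±0.005 through a function and estimates how many significant digits the output deserves. Near its steep region, tan destroys significance; sqrt, whose relative error is half its input's, slightly increases it:

```python
import math

def sig_digits_out(f, x, half_width):
    """Rough count of justifiable significant digits in f(x)
    when x is only known to within +/- half_width.
    A crude two-point sketch, not full significance arithmetic."""
    lo, hi = sorted((f(x - half_width), f(x + half_width)))
    rel_err = (hi - lo) / (2 * abs(f(x)))
    return -math.log10(rel_err)

half = 0.005  # x known to +/- 0.005

# Steep: tan near pi/2 has slope ~200 here, so the output
# is worth barely 1 significant digit.
print(sig_digits_out(math.tan, 1.5, half))

# Shallow: sqrt at 100 -- the input carries ~4.3 digits
# (relative error 5e-5), and the output comes out with MORE,
# because sqrt halves relative error.
print(sig_digits_out(math.sqrt, 100.0, half))
```

The same ±0.005 uncertainty, fed through two different functions at two different points, justifies anywhere from about 1 to over 4 output digits — which is exactly why "where you are on the function" matters as much as the function itself.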