Unpacking these sentiments is enlightening. Effectively the clamour was for a detailed model and computation of what leaving the EU versus staying in might mean, particularly in practical financial ways like affordability of housing.

The fact is, no-one knows, even approximately. In practice you can't predict it, not with today's methodologies. The ecosystem is too complex, with huge numbers of feedback loops and linked components, many of which even individually are almost unknowable.

Sometimes the error bars swamp the value. In the end there's too much variability to say anything much quantitative. You can surmise things like "there'll be a shock if we exit", but not its detailed consequences, or even whether saying so perturbs the consequence, eg. by being self-fulfilling.

What's amazing is that I'm having to say all this. I wouldn't have had to 100 years ago: there wouldn't have been any concept that such predictions could be computable. But in recent times, real-world maths and computation have in many ways been so successful for so many predictions that societally there's an assumption we can "compute anything" or always quantitatively predict with some accuracy how decisions will play out in whatever field.

"A key part of using any powerful tool—computation included—is knowing when it works and when it doesn't."

Don't get me wrong. I'm a keen advocate for computing answers, driving decision-making and optimising actions with computation; I spend much of my working life driving more use of computation; I think we're only at the start of where it can take us in increasingly broad swaths of life. Indeed, since the mid-twentieth century, I'd argue that the rise of mechanised computation continues to be the biggest underlying driver of human progress—particularly through engineering and hard science.

But a key part of using any powerful tool—computation included—is knowing when it works and when it doesn't. With no effective, general education in computational thinking, most people can easily be misled, and they are.

This educational failure is a major part of what's caused "post-truth": years of apparently precise, prominent predictions whose accuracy was at best overstated and at worst simply wrong. "Experts" push numbers to assume an importance beyond their ability to inform, to the point where a sizeable fraction of our population, given no computational education to fall back on, no longer believes any logic, any number, any expert.

I remember a blind "belief in computation" starting to take hold in the 1980s, crystallised in particular for me through a conversation with a friend at school. Some early global climate predictions were starting and I was sceptical that they were right, whether over- or under-estimating the effects. He argued that if the predictions "made a good point" and garnered attention, it was right for scientists en masse to present them simplistically, whether or not they could really justify their veracity. I argued that in the end this would backfire: if any of the widely accepted expert predictions failed, science, computation and logic would suffer huge rejection. Perhaps to needle my (Catholic) friend, I pointed to his church's insistence that it knew the sun orbited the earth in a perfect circle—and the damage this had done both in actions taken (eg. against scientists) and to its own credibility.

"'Experts' push numbers to assume an importance beyond their ability to inform."

The promulgators of predictions—politicians, campaigners and experts—certainly bear responsibility for post-truth. They get more publicity for being definitive on a "big issue" with "evidence" even if they're massively overstating the claim or its precision and damaging long-term credibility of their professions. Instead they need to be prepared to give real answers like "we don't know" or "the only prediction we know how to make assumes xxx, which is probably wrong".

But a major responsibility also lies with the public and in particular their mainstream education. They need experience and developed instinct in questioning models, science, computation. They need the measured scepticism that comes of experience with real, messy situations and today's computational tools, manifested in ready-to-use questions to help them pick apart expert statements. Things like "What are your assumptions?", "Why do you believe this is the basis?", "Have you forgotten any effects?", "What happens if there's a small error in your opening assumption?" and so forth. They need to be versed in dangerous scenarios to look out for and take special care over. For example, I am often more likely to believe localised, short-term predictions than global long-term ones, because the likelihood of massive errors in the model tends to grow very sharply with time and complexity; there's often no control scenario either; and it takes too long to see the effects of the prediction. That's a small example of experience I've developed.

Why isn't STEM—and specifically maths education, as the only mainstream computation subject—teaching these vital topics? They need to be central but aren't. They aren't because they can't be, given the subject's overwhelming focus on hand-calculating.

Look at today's maths curricula around the world and you'll find scant coverage of topics like these, and none with needed modern computer-powered analysis of messy scenarios that give real experience.

Therefore I think a crucial step in the long journey to fixing the post-truth problem is laying out what we want from mainstream maths—which is why we've developed our new "outcomes" list as part of our computerbasedmath.org project. Here's one snippet...

[Full CBM draft outcomes list summary]

Only when our populations can't so easily be misled by maths will they re-engage with its power to persuade. This is vital individually, and societally, or we may start down a path of mysticism, a new era of Unenlightenment.

Firstly, I've got to say, I really like the term.

To my mind, the overriding purpose of education is "to enrich life" (yours and your society's, not just in "riches" but in meaning), and having different ways to think about ideas, challenges and opportunities seems crucial to achieving that.

Therefore using a term of the form “xxx Thinking" that cuts across boundaries but can support traditional school subjects (eg. History, English, Maths) and emphasises an approach to thinking is important to improving education.

Now we've had widespread use of the term "Critical Thinking" for some time, but to me it has much less power of actuality than "Computational Thinking".

“Computation” is a highly definitive set of methodologies—a system for getting answers from questions, and one rapidly gaining in power and applicability each year. There is no parallel, definitive “Critic” system, and even the related “Critiquing” is a rather vague skill bucket, not a systematic—and highly successful—roadmap. As a result, Critical Thinking often becomes more of an aspiration of student capability than a definable, definite, life-enriching set of problem-solving abilities.

To be specific, I'd argue that Computational Thinking is a mode of thinking about life in which you creatively and cleverly apply a 4-step problem-solving process to ideas, challenges and opportunities you encounter to make progress with them.

Here's how it works.

You start by **defining the question** that you really want to address—a step shared with most definitions of "Critical Thinking".

But computational thinking follows this with a crucial transitional step 2 in which you take these questions and **translate them into abstract computational language**—be that code, diagrams or algorithms. This has several purposes. It means that 100s of years' worth of figured-out concepts and tools can be brought to bear on the question (usually by computer), because you've turned the question into a form ready for this high-fidelity machinery to do its work. Another purpose of step 2 is to force a more precise definition of the question. In many cases this abstraction step is the most demanding of high conceptual understanding, creativity, experience and insight.

After abstraction comes the **computation** itself—step 3—where the question is transformed into an abstract answer—usually by a computer.

In step 4 we take this abstract answer, **interpret the results**, re-contextualising them in the scope of our original questions and sceptically verifying them.

The process rarely stops at that point because it can be applied over and over again with output informing the next input until you deem the answers sufficiently good. This might take just a minute for a simple estimation or a whole lifetime for a scientific discovery.
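As a toy illustration of my own (not an example from the post), the four steps and their iteration can be seen in something as small as estimating a square root: define the question ("what number squared gives 2?"), abstract it to a recurrence, compute, then interpret and check, going round the helix until the answer is good enough.

```python
# Toy illustration of the 4-step helix: define -> abstract -> compute -> interpret,
# iterated until the answer is judged good enough.

def estimate_sqrt(target, tolerance=1e-9):
    # Step 1: define the question -- "what number squared gives `target`?"
    # Step 2: abstract -- Newton's recurrence x -> (x + target/x) / 2
    x = target  # crude starting guess
    while True:
        x = (x + target / x) / 2     # Step 3: compute
        error = abs(x * x - target)  # Step 4: interpret -- is the answer good enough?
        if error < tolerance:
            return x                 # declare success and step off the helix

print(estimate_sqrt(2))  # roughly 1.41421356...
```

Each pass round the loop is one turn of the helix; the stopping test is the step-4 interpretation.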

"Modern technology has dramatically shifted the effective process because you don’t get stuck on [the Computational Thinking] helix roadway at step 3, so you may as well zoom up more turns of the track faster."

I think it's helpful to represent this iteration as ascending a helix made up of a roadway of the 4 steps, repeating in sequence until you can declare success.

While I've emphasized the process end of computational thinking, its power of application comes from (what are today!) very human qualities of creativity and conceptual understanding. The magic is in optimising how process, computer and human can be put together to solve increasingly tough problems.

Is this process of Computational Thinking that I describe connected with maths—or even one and the same subject? And what about coding? Talking education, there is very heavy overlap with our Computer-Based Maths approach, much less with today's traditional maths education; coding is an important element, in particular as the way in which you manifest abstraction.

Real-world maths—defining it and its applications broadly, as I do—absolutely relies on Computational Thinking but there are also specific areas of knowledge that maths is considered to contain (eg. particular concepts and algorithms), and which are often important to applying computational thinking to different areas of life. Maths is a domain of factual knowledge as well as the skills knowledge of how to process them.

"Computational Thinking is a mode of thinking about life in which you apply a 4-step problem-solving process to ideas, challenges and opportunities you encounter"

Even in the real-world, this broad definition of the term “maths” may be alien to engineers or scientists who would consider what I’m describing simply as part of engineering or science respectively.

There’s another key difference too between a traditional maths way of thinking about a problem and a modern computational thinking approach and it has to do with the cost-benefit analysis between the 4 steps of the helix.

Before modern computers, step 3—computation—was very expensive because it had to be done by hand. Therefore in real life you’d try very hard to minimise the amount of computation at the expense of much more upfront deliberation in steps 1 (defining the question) and 2 (abstracting). It was a very deliberate process. Now, more often than not, you might have a much more scientific or experimental approach with a looser initial question for step 1 (like “can I find something interesting in this data”), an abstraction in step 2 to a multiplicity of computations (like “let me try plotting correlation of all the pairs of data”) because computation of step 3 is so cheap and effective you can try it lots and not worry if there’s wastage at that step. Modern technology has dramatically shifted the effective process because you don’t get stuck on your helix roadway at step 3, so you may as well zoom up more turns of the track faster.

"[Applying the Computational Thinking process] might take just a minute for a simple estimation or a whole lifetime for a scientific discovery."

A useful analogy is the change that digital photography has brought. Taking photos on film was relatively costly (though cheap compared with the chemical-coated glass plates it replaced). You didn't want to waste film, so you'd be more meticulous at setting up the shot before you took it. Now you may as well take the photo; it's cheap. That doesn't mean you shouldn't be careful to set it up (abstract) to get good results, but it does mean the cost of misfires, wrong exposures and so forth is lower. It also opens up new fields of ad-hoc photography to a far wider range of people. Both meticulous and ad-hoc modes can be useful; the latter has added a whole new toolset, though not always replaced the original approach.

"Real-world maths absolutely relies on Computational Thinking"

Back to maths. However we term the real-world need, whether computer-based maths or computational thinking, what’s sadly all too clear is how today’s mainstream educational subject in this space of "maths" isn’t meeting the need. Its focus on teaching how to do step 3 by hand might have made sense when that was the sticking point in applying maths in life: if you couldn’t do the calculating, you couldn’t use maths or, in general, computational thinking. But primarily gaining experience in a very deliberate, meticulous, uncontextualised, pre-computer application of the computational process—rather than a faster-paced, computer-based, experimental, scientific-style use on real problems—cannot continue to be maths’ primary purpose if the subject is to remain mainstream. Instead its primary purpose ought to be Computational Thinking—as it is in our CBM manifestation.

"Our aim is to build the anchor Computational Thinking school subject as we explicitly broaden CBM beyond being based in maths"

Like real-world maths, coding likewise relies on Computational Thinking but again isn't the same subject or (by most definitions) anything like a complete route to it. You need Computational Thinking for figuring out how to abstract problems into code and get the computer to do what you want; but coding is the art of instructing a computer what to do, the expertise needed to be the sophisticated manager of your computing technology, which includes speaking a sensible coding language, or several, to your computer.

What of other school subjects? Computational Thinking should be applicable to a very wide range. After all, it's a way of thinking—not the only way of thinking—but an important perspective across life. Whether in design (*“How can I design a streamlined cycle helmet?”*) or history (*“What was the key message each President's inaugural address delivered?”*), or music (*”How did Bach’s use of motifs change over his career?”*), every subject should envelop a Computational Thinking approach.

"The Computational Thinking approach needs knowledge of what’s possible, experience of how you can apply it, and know-how of today’s machinery for performing it."

An important practical question is whether that can happen without a core educational subject for learning Computational Thinking itself. I don't think it can, not at school levels anyway. That’s because the Computational Thinking approach needs knowledge of what’s possible, experience of how you can apply it, and know-how of today’s machinery for performing it. You need to know which concepts and tools there are to translate and abstract to in step 2. I don’t think you can learn this only in other subjects; there needs to be an anchor where these modern-day basics (learnt in a contextualised way) can be fostered.

Politically, there are two primary ways to achieve this: introduce a new core subject or transform an existing one. Either is a major undertaking, with coding and maths as the only possible existing school subject contenders for the transformational route. Maths of course is ubiquitous, well resourced and occupies a big part of the curriculum—but today's subject largely misses the mark. Coding is the new kid on the block, too narrow, not fully established and with far less time or money but with a zeal to go new places.

How does CBM relate? For the very short-term, simply as the start of today's best structured program for engendering computational thinking—one that's principally around maths but applied to problems and projects from all subjects.

Ultimately our aim is to build the anchor Computational Thinking school subject as we explicitly broaden CBM beyond being based in maths and, just as importantly, beyond being seen to be based only in maths. Look out for modules of CBM geography and CBM history!

Make no mistake. Whatever the politics or naming, whoever wins or loses—some day, a core, ubiquitous school subject in the space I'm describing will emerge. The first countries, regions, schools that manage this new core and its cross-curricular application will win big time.

I'll leave my main blogpost to do the talking, but suffice it to say that I'm pleased there's a clean, powerful, modern way to put computation at the heart of the enterprise--what I call Enterprise Computation.

It's important for organisations to start to think now about how they manifest this new opportunity which will rapidly become a necessity--one driven particularly by data science.

I am also sad that I never met Seymour. I can't even say when I consciously became aware of him or of the different strands of his work. But his name has seemingly forever been familiar, cropping up with increasing regularity and force in so many of the interests I've pursued, particularly fundamentally reforming maths education (our computerbasedmath.org or CBM project). So many routes in so many areas lead back to Seymour. I can't help but notice with some wry amusement this morning how constructionist an approach I have taken to learning about Seymour's life and work!

While I never met Seymour, I have very much enjoyed getting to know so many of those he knew and worked with, from his daughter Artemis to Brian Silverman, Gary Stager to Mitch Resnick (and no doubt many others that I know but didn't know he directly influenced too). They have been extremely supportive of CBM and of our recent Wolfram Language and Mathematica, helping us to take Seymour's and their thinking, successes, mistakes and experience to try to shift to a much better educational world.

That this post is brief is an indication that others can enunciate the broad array of Seymour's achievements much better than I can. Indeed I am already learning more as they do.

However about one aspect I am quite clear. Seymour's death does not end any aspect I've come across of his vision. Quite to the contrary, we are instead starting to see its manifestation more than ever: a mixed computer-human approach, recasting of subjects, increasing pressure for change in education (frustratingly slow though it can seem). I expect Seymour's fame outside education circles will posthumously grow.

We will work hard to carry key strands of his work forward.

This week's issue is significance arithmetic, similar to what you might know from school as significant figures. The idea is that when you do a calculation, you compute not just a single value but also bounds that represent its uncertainty. You can then get an idea of how accurate your answer is—or indeed whether it has any digits of accuracy at all.

How important is the concept of significant figures to applying maths? And if it's useful, what of the mechanics of computing the answer? Is significance truly significant, in concept and calculation, for today? And therefore should it be prominent in today's maths education?

Here's our surprising conclusion. Significance arithmetic should be far more significant than it is in real-life maths today, rather like its role (if not all the detail) in maths education, where it is covered fairly extensively. It would really help people get good answers, far fewer misinterpretations and a view on whether any of the numbers are justifiable. But people just aren't using it much in their maths lives.

Yes, paradoxically, I'm saying that this is a case in which traditional maths education got it right(er) and real-world maths didn't know what was good for it! Education is ahead!

So, what's gone wrong with significance out there?

Let's start with traditional notation. If you write down 3.2, it's in practice unclear whether you're justifiably claiming "2 significant figures" or whether those digits are simply what came to hand, or all you could be bothered to calculate by hand. The notation (like so much of traditional maths notation) is ambiguous, causing huge confusion to practitioners and particularly students.
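For a concrete taste of unambiguous notation, Python's decimal module (my example, not one from the post) preserves the written distinction between 3.2 and 3.20 and carries it through arithmetic:

```python
from decimal import Decimal

# Written notation is ambiguous: 3.2 and 3.20 "look equal" but claim different precision.
low = Decimal("3.2")    # 2 significant figures claimed
high = Decimal("3.20")  # 3 significant figures claimed

print(low == high)        # True -- equal as numbers...
print(low * 2, high * 2)  # 6.4 6.40 -- ...but the claimed precision is carried through
```

Unlike a scribbled "3.2" on paper, the stored exponent makes the precision claim explicit.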

Then there's how calculators and computers usually operate. They churn out whatever their hardware "machine precision" is--often 16 or 19 digits--even if it has no justification from the input or calculation. People ignore the digits or, if they're transcribed, just think those quoting them are ill-educated (rather as misuse of apostrophes suggests poor education in English).

When you use significance arithmetic there are several stages that can trip you up. You have to be clear what your input precision really is: what is the error I'm inputting (eg. do 2 digits represent it)? Then the calculations you do can dramatically change what significance is justified coming out. For example, a steep slope of a function reduces the relative significance coming out (eg. to 1 digit, because a little variation in the x value results in a big y change), whereas a small gradient can increase the justifiable significant digits (eg. to 4 digits). That means it really matters not only which functions you're computing with, but where you are on the function.
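The slope effect can be made concrete with a condition number: the relative error of f(x) is roughly |x·f′(x)/f(x)| times the relative error of x, so the log10 of that factor is the number of significant digits lost (or, if negative, gained). A rough sketch of my own, using a numerical derivative:

```python
import math

def digits_lost(f, x, h=1e-6):
    """Approximate significant digits lost (negative = gained) when computing f(x).

    Uses the condition number |x * f'(x) / f(x)| with a central-difference derivative.
    """
    derivative = (f(x + h) - f(x - h)) / (2 * h)
    condition = abs(x * derivative / f(x))
    return math.log10(condition)

# Steep region: exp at x = 10 has condition number 10, so about 1 digit is lost.
print(digits_lost(math.exp, 10.0))   # about 1.0
# Shallow region: exp at x = 0.01 has condition number 0.01, so about 2 digits are gained.
print(digits_lost(math.exp, 0.01))   # about -2.0
```

Same function, different points on it, opposite effects on the justifiable output digits.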

Screen-capped from an old video: Comparing how the output precision of a computation can vary widely even for a simple function where the input precision is being kept constant.

This is messy, time-consuming and tedious to handle by hand. And yet most numerical computation software (or hardware) doesn't do significance arithmetic.

The result is predictable: significance arithmetic is usually too hard to use, so people don't bother. But rather than this cutting significance out of education, it has cut it out of the real world, with the effect that a lack of accessible computing usually has. In and out of education, computing significance has traditionally been much too complex to bother with.

Mathematica is an exception. Right from the beginning we've had significance (and big-number) arithmetic. We invented a notation using ` to specify input precision; all relevant functions compute significance and output only what's justified. Some (eg. for numerical differential equation solving) even step up precision sufficiently during calculation to meet a goal you have set for output precision, assuming this isn't trumped by too low a precision of input.

We have fun demoing a collapse-of-significance problem in Excel v. Mathematica. At each iteration the result should be 0.2 but the precision is constantly reducing. Excel goes way off after just 20 steps with no warning.

**Excel** does not track significant digits and very quickly produces nonsensical answers. The failure of significance is easy to spot in this simple example; it would be potentially disastrous inside a more complex model.

Starting with 30 significant digits of input, **Mathematica** tracks and displays the reducing justifiable digits, until an error box alerts you to a complete loss of significance.

We've picked on Excel here, but almost any numerical software will behave somewhat similarly on an example like this.
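The exact recurrence behind the demo isn't given above, so here's a classic stand-in with the same collapse behaviour (an assumption on my part): the map x → 11x - 2 has 0.2 as a fixed point, but it multiplies any representation error by 11 each step, so machine-precision floats drift wildly within about 20 iterations while exact rationals stay put.

```python
from fractions import Fraction

x_float = 0.2                # binary double: already off by roughly 1e-17
x_exact = Fraction(1, 5)     # exact rational 0.2

for step in range(25):
    x_float = 11 * x_float - 2   # the tiny initial error is multiplied by 11 every step
    x_exact = 11 * x_exact - 2   # stays exactly 1/5 forever

print(x_exact)   # 1/5
print(x_float)   # wildly wrong: the initial error has grown by a factor of 11**25
```

Significance-tracking arithmetic would have flagged the float's digits as unjustified long before step 25; plain machine precision just keeps printing 16 confident-looking digits.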

Yet if you use the right numerical tools for the job, significance arithmetic takes on its true significance.

Back to CBM, and how significance should or shouldn't figure in what's covered. Our view is that the failure of widespread adoption of significance in real life isn't because its worth has been superseded or mechanised away but, on the contrary, because computing (with some notable exceptions!) has yet to make it easy enough for the apparent benefit. That will change, and hopefully before our students graduate. So we're voting for more significant significance!

One final point. This is but one example of thousands across all areas of the maths curriculum that require deep rethinking for fundamental reform of the subject of maths. I hope it gives a flavour for why CBM is so hard a project to do thoroughly and why it requires so broad a view of the use of mathematics and state of computing technology.

Why? Because the world's most transformative machines have been used largely for the wrong purpose in most classrooms: automating pedagogy, not changing the subject taught.

Countries with the most attentive teaching are also likely the countries where there is least pressure to computerise pedagogy for teaching today's school subjects. They do best in PISA because they are best at helping students through those subjects.

But this misses two fundamental points. Firstly, particularly around STEM and maths, computers changed the real-world subject, but they have yet to change the school subject or PISA's assessments. We're assessing largely the wrong type of maths (hand-calculating, not computer-based problem-solving) and noting that computers in the classroom haven't helped improve the results. Computers need to be used for doing the calculating, not for teaching students how to do hand-calculating. Secondly, today's computers are machines you need to get familiar with, both in operating them today and in being able to adapt as they improve. Those skills aren't being assessed.

I think the OECD should be praised for bringing up the correlation between technology usage and results, but, as we teach in computerbasedmath.org, they need to be careful to assign causality correctly. They need to move their PISA assessment quickly to a new domain of computer-based subjects to counter the problem, not suggest that the technology is redundant.

Neither characterisation is true in my view.

What really seems to have spooked people is the psychological turnaround from apparently omnipotent Chinese government, able to command and fix at will, to a government that's apparently largely as financially impotent as any other.

Haven't we seen this same "country on a pedestal" culture before? The one that saw Japan fall from grace in the 1990s, the US in the 2000s (along, in a small way, with the UK) and now China?

It's astounding that China has maintained its 10% growth rate for 30 years. But like all civilisations, it hasn't got all the answers. Nor had the US or Japan. But each of these eras did have substance, just not to the extent that everyone wanted to believe.

Is there a counterbalance to the human condition of generating stardom beyond rational views of reality?

Factual information that's accessible certainly helps. Really asking diverse questions of the data, rather than taking a few people's perceptions as gospel, can at least produce a greater variety of viewpoints and allow everyone to ask the questions they have.

But achieving this requires us to step up to the mark on computable data and on human ability to know how to question it. We need truly accessible interfaces to data and our people to have instinctive, innate ability to ask the right questions to uncover its significance.

Just my two cents (or was that yuan, or even jiao or fen!) on how we can move from today's quasi-quantitative world to one where the richness of a real quantitative approach shines through.

It's taken me a few days to realise that there were actually two very different "importance of evidence" conversations--one with which I completely concur, and one with which I vehemently disagree. In the end, what I believe this exposes is a failure of many in charge of education to understand how major innovation usually happens--whether innovation in science, technology, business or education--and how "evidence" can drive effective innovation rather than stifle it. In an age of massive real-world change, the correct and rapid reflection of this in education is crucial to future curricula, their effective deployment, and achieving optimisation for the right educational outcomes.

I'm going to call the 2 evidence utilisations "innovation-led evidence" and "evidence-led innovation".

The difference is whether you build your "product" (eg. phone, drug, curriculum) first, then test it (using those tests for iterative refinement or rejection) or whether formal evidence that exists from previous products becomes the arbiter of any new products you build.

The former--"innovation-led evidence"--is highly productive in achieving outcomes, though of course care must be taken that those outcomes represent your objectives effectively. The latter--"evidence-led innovation"--almost by definition excludes fundamental innovation, because it means only building stuff that past evidence said would work.

When you build something significantly new it isn't just a matter of formally assembling evidence from the past in a predictable way. A leap is needed, or several. Different insights. A new viewpoint. Often in practice these will occur from a mixture of observation, experience and what still appears to be very human-style intelligence. But wherever it comes from, it isn't straightforwardly "evidence-led".

I strongly agree with the late physicist (and friend of my brother's) Richard Feynman, who explained nicely in one of his famous 1960s Caltech lectures how the scientific process works. I could summarise it: guess, make a theory, test it and compare the results with the theory. (Film of this lecture exists--see the first minute!)

In the case of technology, "theory" is the product, in pharmaceuticals it's the drug and in education (for the most part) it's the curriculum.

"Evidence-led innovation" stifles major innovation--it locks out the guess--yet I firmly believe that that's what most of "evidence-led education" is talking about with painfully little "innovation-led evidence" applied.

I've faced this repeatedly with Computer-Based Maths. I'm asked "do you have evidence it works?". I sometimes answer "where's your evidence that today's traditional maths education works? Have you done randomised controlled trials?".

As quickly as we can build curricula, fund their development and set up projects in different countries, we are starting to gather evidence. Something that slows this down is the need to have student assessments that accurately reflect required outcomes: it's not just a matter of comparing exam results before and after; open-ended computer-based maths assessments are needed too.

One problem with the "evidence-led innovation" crowd is that they often have no idea how hard it is to build something completely new. They think you can do micro-innovations, then test, then micro-innovate then test.

Actually so far CBM is the hardest innovation I've been involved in. It's been amazing to me just how different every aspect of the maths curriculum becomes when you do not need to assume hand-calculating. Equally amazing is how deep everyone needs to dig into their own understanding to uncover those differences, particularly since those involved have learnt maths traditionally.

You might ask whether now is the time for a new maths curriculum. Can we really take the risk? As guesses go, the idea that maths education should be the same subject as maths in the real world (ie. using mechanised computation), and not the current hand-calculating proxy, is an extremely sure-footed one. Not leaping with the real world poses a very significant danger of its own. Let's have the courage to develop and test CBM around the world, indeed more thoroughly than any maths curriculum has been tested before.


Clearly data science is a major, growing and vital field—one that's relatively new in its current incarnation. It was born of, and is driven forward by, new technology: our abilities to collect, store, transmit and "process" ever larger quantities of data.

But "processing" has often failed to elucidate what's important in the data. We need answers, not just analytics; decisions, not just big data.

Computation in all its forms is key to getting decisions from data. And funnily enough, computation is used not only for analytics but also for human-language interrogation of data, interactive deployment and much else--crucial usability, not only raw computational power.

It's to bring all these aspects together that we're hosting a one-day summit in London this Thursday entitled "Mastering your data with the [latest, most powerful!] computation"--that's with my [ ] editorial.

Most people don't recognise Wolfram as a key data science company. And yet over the last few years, we've built up a unique and integrated technology stack, not only to offer the most powerful computation on data but to optimise usability across the whole workflow and, crucially, to be a data science platform.

Now, we have hosted the "Wolfram Data Summit" in the US for the last 5 years. Its emphasis is a little different from, though related to, our upcoming London summit: "A high-level gathering of innovators in data science, creators of connected devices, and leaders of major data repositories". In London, we'll be focussed on how new ideas can be deployed today in your organisation.

In the end, London will be a great follow-on from my talk kicking off TEDx at the UK Parliament earlier this year, in which I addressed the question "has more data led to better decisions and better democracy?"--filling it out and extending it to many fields beyond democracy.

Really hope we'll see you there.

Traditional areas of maths like algebra, calculus or trig don't seem a good way to think about subdividing the subject in the modern world.

You might ask, why subdivide at all?

In a sense, you shouldn't. The expert mathematician utilises whichever areas of maths help them solve the problem at hand. Breadth and ingenuity of application are often key.

But maths represents a massive body of knowledge and expertise. Subdividing it helps us to think about different areas, and helps curricula focus their energies so that students gain sufficient depth of experience at a given time to get a foothold.

However, I believe the subdivisions should be grouped by modern uses of maths, not ancient divisions of tools.

So here goes with our 5 major areas:

- Data Science (everything data; incorporating but expanding on statistics and probability)
- Geometry (an ancient subject, but highly relevant today)
- Information Theory (everything signals, whether images or sound--right name for this area?)
- Modelling (techniques for good application of maths to real-world problems)
- Architecture of Maths (understanding the coherence of maths that builds its power; closely related to coding)

Comments welcome!

I believe PISA is meticulous in conducting its tests and reflects a good evaluation of standards of today's maths education. And yet I think if countries like the UK simply try to climb up today's PISA assessment, they'd be doing the wrong thing.

The playing field of today's maths education is restricted to manual calculating procedures allied to the limited problem-solving that they can support. Today's mainstream real-world maths is much broader: applying the process of maths--using the best computational mechanisation--to much harder problems. The skills it requires are rather different, but if anything more conceptual, more intellectual and definitely more creative.
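To make "mechanised computation" concrete, here is a minimal sketch--figures purely illustrative, not from any real analysis--of a classic affordability question. The machine handles the calculating step (the annuity formula), leaving the human to define the problem and interpret the answer:

```python
def monthly_payment(principal, annual_rate, years):
    """Monthly repayment on a standard repayment loan (annuity formula).

    The human work is framing the question and judging the answer;
    the calculation itself is mechanised.
    """
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    if r == 0:
        return principal / n      # interest-free edge case
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical figures: a 200,000 loan at 4% over 25 years
pay = monthly_payment(200_000, 0.04, 25)
print(round(pay, 2))  # roughly 1055.67 per month
```

The interesting, human part of the problem--is this affordable, and under what assumptions about rates and incomes?--starts where the mechanised calculation ends.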

That's a playing field on which Brits and the like could do relatively much better than on the playing field of procedural hand-calculating. It's a playing field on which drilling kids for hours a day on their algebra isn't going to win.

Now let's be clear. I'm not saying that that's universally what's happening in Asia. In fact there's great innovation in the process of schooling, and particularly the learning of maths, in the region (famously Singapore). Nor am I in any way writing off Asian problem-solving ability, which I think, correctly and creatively trained, could be top-notch too. What I am saying is that if Brits really put their minds to modern computer-based maths, they are just as able to compete with their Asian counterparts--whereas I don't think culturally we will do so well at drilling the needlessly pre-abstracted and often irrelevant current subject. I think that non-conformity, creativity and looking around the rules are key to British (and many other Western) cultures, and a great competitive strength if tethered appropriately--opposite to the cultural imperatives present in many of the countries performing well in today's maths PISA test, countries that may struggle to imbue such characteristics.

And crucially, it's many of these abilities, and the computer-based maths subject, that we desperately need in the workplace and in life--not, for the most part, the hand-calculating procedures we're largely failing to succeed at. (My recent talk opening the CBM summit at UNICEF details the argument.)

A central question in all this is precisely what outcomes we wish for our students after their years of maths study. It's a question we have been addressing from first principles in formulating CBM, unencumbered by the constraints PISA necessarily has: not going too far ahead of today's curriculum, and needing accurate quantitative assessment of it. For the brave, here is an early (hard-to-digest) draft which spans 10 dimensions. I won't detail all the ideas here, but point out the importance of confidence, of knowing how to operationally manage the application of maths, and of understanding the separation between maths concepts (like significance) and the use of a wide variety of specific tools (like a hypothesis test).
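To illustrate the concept/tool separation: the concept is significance--how often chance alone could reproduce an observed difference--while the specific tool might be a t-test or, as in this hypothetical sketch with made-up data, a permutation test:

```python
import random
import statistics

def permutation_p_value(a, b, trials=5000, seed=0):
    """Estimate significance of the difference in means between a and b.

    Concept: significance = fraction of chance relabellings that produce
    a difference at least as large as the one observed.
    Tool: here, a permutation test; a t-test answers the same
    conceptual question with a different specific technique.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)  # relabel the data at random
        diff = abs(statistics.mean(pooled[:len(a)]) -
                   statistics.mean(pooled[len(a):]))
        if diff >= observed:
            extreme += 1
    return extreme / trials

# Made-up samples with clearly different means
group_a = [5.1, 4.9, 5.3, 5.0, 5.2, 5.1]
group_b = [4.2, 4.4, 4.1, 4.3, 4.5, 4.2]
p = permutation_p_value(group_a, group_b)
print(p)  # small p: chance alone rarely reproduces the gap
```

The conceptual judgment--what counts as "significant", and whether the question was framed sensibly--stays with the student, whichever specific tool does the computing.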

Intelligently ranking countries, as PISA does, is very helpful in pushing progress in education, because succeeding at today's maths education--or tomorrow's computer-based variety--needs well-directed effort, focus and competition. But in the end, however well education is delivered, it must deliver the right subject.

Notice that our first CBM country, Estonia, is already high on PISA. They recognise that despite their achievements, they need to lead the change in maths. Indeed, many of the countries near the top of today's rankings have been the most active in pursuing the CBM approach.

Now, the UK is doing well alongside Estonia in leading the coding education agenda. But why oh why does the UK government choose to separate coding in primary education from maths, with which it should be so intertwined (as the US has done too)? They need to be closely associated, as I pointed out last year. And it's particularly galling that they're not in the country where a mathematician invented the computer...

Playing badly on the wrong field is hardly smart. As the playing field shifts, let's lead the change, not be laggards at a game we can succeed in.

This really has at least 4 dimensions of consequence:

Firstly, it's a unique way to excite students about maths by marrying it up with coding. Coders will be able to use the power of Mathematica's maths out of the box, not only enriching what they can do but also showing off the power and importance of maths. Attaching maths to something already enjoyable, making it better and more enjoyable still, will I think be very encouraging for learning more maths. And you never know, politicians and policy-makers might even start to see the connection between coding