TUNDRA (Tracking Under-representation by Area)

First Thoughts and Observations of a New Metric

By Karina Berzins

The Office for Students (OfS) has just released a new experimental metric for examining HE participation geographically. We are all familiar with the POLAR measure; TUNDRA is a similar measure, but with a number of differences which I will outline below.

Firstly, the OfS is right to release a new metric for examining area-based participation, given the methodological flaws in POLAR4, in particular in the underlying population data. Those figures were calculated using population estimates, which resulted in 8% of London MSOAs (Middle Layer Super Output Areas) having participation rates of more than 100%. This is of course impossible, and while these areas were capped at 100% for the construction of the quintiles, this level of inaccuracy is extremely problematic given that POLAR4 is used not only as a metric for universities' APPs (Access and Participation Plans) but also as a metric for millions of pounds of NCOP (National Collaborative Outreach Programme) funding.
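To make the problem concrete, here is a minimal sketch (with invented numbers, not OfS data) of how an underestimated base population can push an area's participation rate past 100%, and why capping the rate for the quintiles masks the error rather than fixing it:

```python
def participation_rate(entrants, base_population):
    """HE entrants from an area divided by its estimated young population, as a %."""
    return 100 * entrants / base_population

# Suppose an MSOA actually has 120 young people, but the estimate says 90.
true_pop, estimated_pop = 120, 90
entrants = 100  # young people from the area observed entering HE

raw_rate = participation_rate(entrants, estimated_pop)  # over 100% - impossible
capped_rate = min(raw_rate, 100.0)                      # what goes into the quintiles
true_rate = participation_rate(entrants, true_pop)      # what the rate should be

print(f"raw: {raw_rate:.0f}%  capped: {capped_rate:.0f}%  true: {true_rate:.0f}%")
```

The capped figure still overstates the true rate by a wide margin, which is why a better base population (rather than a cap) is the real fix.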

The greatest methodological strength of TUNDRA is its much more rigorous accounting for the base populations. Here, data is collected from the Department for Education for all Key Stage 4 learners at mainstream state-funded schools. These learners' home addresses were collected between 2010 and 2014, and the learners are then picked up in the HESA (Higher Education Statistics Agency) data (using fuzzy matching) for the academic years 2012/13 to 2017/18. This of course brings TUNDRA much more up to date than POLAR4, which only went up to the 2014/15 academic year.
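The OfS does not publish the details of its fuzzy matching, so the following is purely an illustration of the general technique, with invented record formats, using Python's standard-library difflib:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity between two learner records (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical records: a DfE school record and candidate HESA records
# with slightly different name and date formats.
school_record = "SMITH, John 2001-05-12"
he_records = [
    "Smith John 12/05/2001",
    "Jones Mary 03/09/2000",
]

# Link the school record to the most similar HE record.
best = max(he_records, key=lambda r: similarity(school_record, r))
print("closest HE record:", best)
```

Real record linkage would use more fields (postcode, date of birth parsed properly) and a validated threshold, but the core idea of matching on similarity rather than exact equality is the same.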

So far so good. We have a much more credible base population, and an equally rigorous participation dataset. Of course, not all learners attend mainstream state education. In 2016 there were 518,432 learners attending independent schools, approximately 7% of all learners (see the ISC Key Figures Report here). While this is a minority, we know that these learners are over-represented in Higher Education overall, and in particular at Oxbridge and Russell Group institutions. For this reason I would suggest that the OfS consider working with the independent school sector to include these learners in the dataset.

I also have slight ideological objections to the decision to include only mainstream schools in the analysis, as this seems to presuppose that learners from special schools or Pupil Referral Units (PRUs) are not going to enter Higher Education. In terms of Widening Participation (which is at the heart of why we develop all these metrics), this seems to me to be the opposite of what we should be doing. Surely our task is to include these learners, and to develop ways to encourage them into Higher Education if they choose to go. I for one would welcome a participation measure that covers learners from PRUs, special schools and secure units.

While TUNDRA so far really does seem superior to POLAR4 because of the base populations, for me the England-only focus is another weakness. POLAR4 was the first iteration of the measure to include all UK nations, so restricting TUNDRA to England feels like a step backwards.

Finally, the TUNDRA method accounts for MSOAs with a very low base population by suppressing areas with fewer than 50 learners; in all there were only 27 of these. This is another improvement on POLAR, where it has been calculated that a single individual in an MSOA with a base population of fewer than 50 can shift the overall rate by 2 percentage points or more. The OfS has released a methodological note on this here.
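The arithmetic behind that sensitivity is simple: with a base population below 50, a single learner accounts for more than 1/50 of the denominator. A quick sketch with invented numbers:

```python
def rate(entrants, base_population):
    """Participation rate as a percentage."""
    return 100 * entrants / base_population

base = 40                # MSOA base population, below the 50-learner threshold
before = rate(10, base)  # 25.0%
after = rate(11, base)   # one extra entrant: 27.5%
swing = after - before   # 2.5 percentage points from a single individual

print(f"one learner shifts the rate by {swing:.1f} percentage points")
```

At a base population of exactly 50 the swing is 2 percentage points, and it only grows as the base shrinks, which is why suppression below that threshold makes sense.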

For me, the test of any new geographically based metric is to look at London: a large city with a lot of population movement, and a mix of wealthy and poor areas, often in close proximity. Examining the TUNDRA map (which is very slow to navigate) alongside the POLAR4 map reveals some interesting differences.

TUNDRA London


POLAR4 London

It seems that, by not including the independent schools, parts of West London move downwards in terms of participation. This is an interesting development, and may well lead to more accurate analyses of WP cohorts and their participation rates.

Of course, all of this is just first thoughts, and I will be looking at the underlying dataset and comparing it with POLAR4 to see which areas change significantly between the two methods. This will be the subject of my next blog post.

Overall, this is a good move by the OfS, and at first glance TUNDRA seems a much more rigorous and robust metric. One wonders, though, when (if at all) it will replace POLAR4, and how this will affect NCOP funding and APP reporting for institutions.

I would love to hear what you think – please leave a message below.




2 Responses to TUNDRA (Tracking Under-representation by Area)

  1. Jess Brown says:

    Found this an interesting read and a nice simple break down of the differences between POLAR and TUNDRA – I too am interested in how this could affect NCOP funding. Looking forward to your next blog post.

  2. Naomi Clements says:

    The decision to leave out SEND schools and PRUs is interesting when Phase 2 of NCOP and outreach hub work includes them. I agree that the exclusion has several ideological issues and would support their inclusion in the data – likewise virtual school/college progression data would be useful too. It would be interesting to know if over the next 3-5 years this tool will be further developed and HEPs asked to use TUNDRA in yearly APP reports? (Also, as a Mighty Boosh fan, TUNDRA is a great name – https://www.youtube.com/watch?v=t3rIcCdI0-g )
