Prepared by Willem de Vries, Deputy Director-General.
The opinions expressed in this paper, however,
are personal and do not necessarily reflect Statistics
Netherlands' position or policies. The author
thanks colleagues of Statistics Netherlands for
comments on an earlier version, as well as the
International Statistical Review's referees for
their valuable comments and suggestions.
The attached paper, "Are we measuring up...?
Questions on the performance of national
statistical systems", is intended to assist
discussion at the Open Forum on the Development
of Performance Indicators for National Statistical
Offices, to be held on 26 November 1999.
The paper was prepared by Willem de Vries,
formerly Deputy Director-General of Statistics
Netherlands, and currently Deputy Director
of the United Nations Statistics Division.
It appeared in the April 1999 edition of
the International Statistical Review of
the International Statistical Institute,
and served as an invited paper at the forty-seventh
plenary session of the Conference of European
Statisticians, held in June 1999.
The paper proposes a systematic approach to evaluating the
performance of national statistical systems. Its
starting point is the so-called Fundamental Principles
of Official Statistics, which were adopted by
the United Nations some time ago. The aim is to
translate the principles into operational terms
and concrete questions about 'how we are measuring up'.
The rankings (or league tables,
as they were called) of national statistical offices,
published by the newspaper The Economist some
years ago, caused mild shock-waves among official
statisticians around the world. The first Economist
ranking (1991) was primarily based on the timeliness
and accuracy of some major statistical series.
The second round, in 1993, also took into account
judgments of chief government statisticians about
the objectivity of statistics (in terms of absence
of political interference), reliability of the
numbers, the statistical methodology that was
applied and the relevance of the published figures.
Appreciation of the ratings
naturally varied. The national statistical offices
which were mentioned in the Economist's list were
more or less pleased, depending on their relative
position. Offices not in the list wondered why
they had not been mentioned. Some offices argued
that their rating was questionable or incorrect,
because the information used had been incomplete
or outdated. There was, however, little discussion
about the criteria The Economist had used, even
though there was fairly broad agreement that the
assessment had been somewhat superficial.
From The Economist's point of
view, as a newspaper primarily voicing the interests
of the users of macro-economic statistics, the
'objective' criteria it applied (average size of
revisions to GDP growth, timeliness, and value
for money in terms of the number of statisticians
per 10,000 population as well as the government
statistics budget per head of the population)
made good sense. Adding senior statisticians'
views to these criteria was perhaps not a bad
idea either. However, it was clear to most insiders
that the overall ratings at best presented an
incomplete picture. In this paper a more comprehensive,
systematic checklist of points to be considered
in evaluating a national statistical office or
national statistical system is proposed. Theoretically,
a distinction may be made between 'system' and
office. In countries with a decentralized statistical
system, the 'system' consists of a collection
of 'national statistical offices'. In this article
I always refer to the system as a whole,
even where I use the term 'national statistical
office' or 'institute' (NSI for short, which is
the commonly used international term). Obviously,
measuring the performance of a 'system' may be
more complex than measuring the performance of
single 'offices', but this article is not so much
about the technicalities of measuring.
It is mainly based on the so-called
Fundamental Principles of Official Statistics,
first adopted by the Economic Commission for Europe
during its 47th session, Geneva, 15 April 1992,
and subsequently endorsed by the United Nations
Statistical Commission (after some minor amendments).
These 10 Principles are now a widely agreed
framework for the mission of national statistical
offices and indeed also for the statistical work
of official international organizations.
After quoting the official wording
of each of the Fundamental Principles of Official
Statistics, a brief explanation in simple words
of the essence of each Principle will be given.
In addition, I have tried to make the principles
more operational by raising some questions about
them. The answers to those questions should indicate
whether and to what extent a principle is adhered
to in a given NSI. The paper does not discuss
all aspects of each of the Principles in any depth.
It only raises some points which are thought to
be of key interest. Some Principles (e.g. the
one on confidentiality) involve so many complex
issues that they may be (and indeed sometimes
are) the subject of full-fledged conferences of
experts. Another thing it does not do, is discussing
measurement issues (in other words: how to 'score'
on the questions) in a strictly quantitative sense.
Relevance, impartiality and equal access
1. Official statistics provide an indispensable
element in the information system of a society,
serving the government, the economy and the public
with data about the economic, demographic, social
and environmental situation. To this end, official
statistics that meet the test of practical utility
are to be compiled and made available on an impartial
basis by official statistical agencies to honor
citizens' entitlement to public information.
In other words: Principle 1
means that official statistics should be relevant
for society, compiled in an impartial manner,
be free from political interference and be accessible
for everyone under equal conditions.
One of the reasons why Britain
and the USA were rated relatively low (despite
their good performance in other respects) by The
Economist in 1993 was: 'the lingering suspicion
that statistics in America and Britain are subject
to political meddling'. Despite recent moves towards
more centralization of official statistics in
Britain, a large part of the statistical work
is still scattered among about 30 government departments,
where the statisticians report directly to ministers.
This (wrote The Economist) 'allows politicians
to take an unhealthy interest in statistics...'.
There are several questions
to be asked in judging national statistical offices
against the background of the principles of relevance,
impartiality and equal access.
The ultimate question pertaining
to relevance would of course be: to what extent
do the users think that the activities (data collections,
or ultimately outputs and products) of statistical
systems are relevant for them? It is, however,
extremely difficult to express this aspect of
'user satisfaction' in terms of one or a few
simple indicators (which does not mean one should
not try to do so). Some users may consider some
activities to be very relevant (while others may
not), and may be very dissatisfied with other
activities (much liked by others). Therefore,
I would propose a more general question which
has to do more with the general attitude of NSIs
in this regard than with concrete indicators or
measures. That question is:
1. How well developed are mechanisms to ensure
that statistical work programs are relevant for
the various user groups?
In many countries, there is
something like a national advisory board for statistics,
but whether this works satisfactorily or not is
a different matter. In addition there are, however,
many other possible mechanisms to foster the relations
between users and producers of official statistics.
The basic question to ask here is: are national
statistical offices making a real effort to find
out what their users need and to adapt their statistical
programs accordingly? And the next question would
be: how flexible are they in practice when it
comes to tackling 'new' (and probably quite relevant)
subject matter areas such as the services sector,
the environment, the 'information technology sector'
and other matters relating to the economy of the
'intangibles', and last but not least 'the global
economy' (including phenomena such as foreign
direct investment and correct measurement of the
activities of multinationals in general).
Another, more specific question
regarding 'user satisfaction' would be:
2. How satisfied are users with statistical
products and their dissemination?
Apart from statistical programs,
which often describe what statistical offices
are doing or are planning to do in terms of the
subject matter areas to be covered, the content
and coverage of data collections, and sometimes
the methodology to be used and the timing and
expected quality of statistical results, there
are also the actual statistical outputs to consider
and how the users appreciate these: news releases,
printed publications of various kinds, data in
electronic formats, including data bases etc.
In other words: do statistical offices have a
well-developed dissemination system? Are the statistical
products what the users want in terms of quality,
timeliness, price, distribution modes? Are sales
of statistical products increasing or declining?
Is there any real, systematic marketing effort?
One may argue that this is a tricky question;
what if an NSI is very active in marketing and
measuring user satisfaction, but is getting poor
results (i.e. low user satisfaction) in return?
My assumption is, however, that an NSI which shows
such real user orientation will ultimately almost
unavoidably improve its performance.
As to impartiality, the question is:
3. How well do national statistical offices
adhere to their obligation of impartiality?
This may sound relatively simple,
but in fact there are rather complex issues at
stake. The complexity largely depends on one's
general notion of 'impartiality'. Very orthodox
official statisticians may believe that even undertaking
a survey at the special request of a ministerial
department may affect the impartiality of a national
statistical office, especially if this department
(usually paying for the extra work to be done)
wants to have a say in the methodology of the
survey. However, most statisticians may tend to
interpret 'impartiality' more loosely as: avoiding
any partisan view in the choice of definitions
or methodology, and, most particularly, avoiding
a partisan stand as to the release of statistical
numbers and commentary on those numbers.
Most national statistical offices
have a strong tradition of not making any
non-statistical comments on their numbers. Sometimes
this principle is adhered to very strictly. In
a press release about the latest unemployment
numbers, the comment given will then be restricted
to something like: 'Compared with the previous
quarter, unemployment has decreased by 0.7 percentage
points', leaving any additional comments to politicians
and others. Nowadays, as many statistical offices
wish to improve press coverage of their numbers,
some may comment as follows: 'The decrease of
unemployment in this quarter was 0.7 percentage
points compared with the previous quarter. This
is the strongest quarterly decrease since the
second quarter of 1982'.
As a general principle, however,
statistical offices should (and most will indeed)
avoid making any comments referring to the success
or failure of government policy, even if the numbers
may seem obvious in revealing this. It may be
argued that NSIs doing 'deep analysis' of data
cannot always avoid making 'non-statistical comments',
but I think that such analytical publications
should be regarded as being distinct from regular
statistical releases.
As far as the issue of 'political
interference with statistics' is concerned, the
question to ask is:
4. How well are statistical offices shielded
from political intervention as to the content
and the release of statistical results?
Some of the most common forms
of unwanted political intervention seem to be:
- Pressure to change definitions
in order to obtain statistics which put government
policies in a better light
- Tampering with the release
of key statistical numbers, in order to select
a moment for release which is politically
favorable or least damaging
- Interference with statistical
releases to the extent that some or all of the
statistics are not released (perhaps this
refers not so much to centralized NSIs as
to statistical activities of ministries in
decentralized statistical systems)
- Discontinuation of series
(through budgetary or other means) that might
prove politically embarrassing
- Leaking of 'favorable' statistics to the
media by politicians before
the data are made available for everyone
Apart from the first category
(for which it is hard to formulate general rules
of good practice), the highest risk of political
interference with statistics therefore occurs
at the stage when numbers are (about to be) released.
To avoid tampering with releases of fresh statistical
numbers, many countries have now adopted a system
of announcing release dates of key statistics
well (a month or even a year) in advance. Avoiding
leaks may prove to be more difficult. There is
the custom in many countries to give ministers
a head start as to fresh key statistics by supplying
them with the numbers some time before these are
officially released. This may be anything from
an hour to several days, and the list of recipients
of these 'pre-releases' may be quite extensive.
There is general agreement among statisticians,
however, that it is commendable to restrict both
the list and the time lag as much as possible.
It is generally agreed that pre-releases should
ideally be restricted to a very small number of
ministers and senior civil servants who have to
be able to respond to questions by the media immediately
after the release of the statistics and that the
lead time they get should be in the order of at
most a few hours.
As to 'equal access', the question is:
5. How well is the principle of 'equal access
under equal conditions' adhered to?
Apart from the political considerations
under the previous point, there is also the general
principle of safeguarding that all users are treated
equally. Some aspects of this equality are not
trivial. Obviously, for some numbers a head start
of minutes, for one user over another, may generate
a considerable (financial) advantage. Therefore,
statistical offices have to find ways to give
all users access to fresh numbers at virtually
exactly the same moment. Apart from recently developed
possibilities of simultaneous electronic distribution
(e.g. by e-mailing statistical releases to the
media), some countries are using a system of 'lock-ups'
for the release of certain sensitive numbers.
Under the lock-up system, members of the press
are literally locked up in a room, some time before
the moment of pre-announced release of the statistics.
The journalists are then given the statistics to
enable them to compose their articles or messages.
The room is equipped with computer facilities
and telecommunication equipment. However, telecommunications
are of course blocked until a central switch is
opened at the official moment of release.
Another aspect of equality is
that, in principle, all users should pay equal
prices for the same statistical products and that
the number of 'privileged users' who get the data
free of charge (government agencies, members of
parliament) should be restricted as much as possible.
A slightly distinct point, which
is not covered by the principle of 'equal access'
as such, but which is nevertheless essential,
is the notion that official statistics are (intended
as) a public good, which should in principle be
freely available for all citizens. Most NSIs put
this notion into practice through various means.
First of all, as mentioned before, building up
good relations with the media is important to
serve the general public with basic statistical
information. Secondly, it is a generally accepted
practice that NSIs make arrangements that the
most important statistics are freely accessible
in their own libraries and in university and public
libraries. Thirdly, most NSIs will give free information
over the telephone (including follow-up by sending
free copies of tables etc. by mail) or by electronic
channels, such as the Internet. Discussions on
how far 'free' should go, however, are still inconclusive.
Some argue that all available statistics should
be supplied free of charge on the Internet, others
think that only some basic information should
be free, while for further details some charge
should be paid. The second point of view would
be consistent with the most common practices followed
nowadays for printed material: limited sets of
material (e.g. some photocopies) are free, users
who want more have to pay the marginal cost of
the data carrier plus shipping and handling, occasionally
even something for extra work that may be necessary
to compile alternative tabulations etc.
2. To retain trust in official statistics,
the statistical agencies need to decide according
to strictly professional considerations, including
scientific principles and professional ethics,
on the methods and procedures for the collection,
processing, storage and presentation of statistical
data.
Principle 2 simply says that
official statistics should be compiled by using
professional methods and also that statistical
results should be presented to the users in a
professional manner.
The real issue here is: to what
extent is the professional integrity of NSIs safeguarded?
Measuring professionalism and the adherence to
professional ethics and even more so: comparing
these characteristics between national statistical
offices is obviously very difficult. On a subjective
level, there may be some agreement among statisticians
that national statistical office X or Y is relatively
active in terms of methodological innovation in
this or that area, but agreeing on some objective
measure is an entirely different matter. The number
of university graduates and their percentage share
in the total staff of a national statistical institute
may be an indication of its 'methodological potential',
as may the number of research and methodology
papers that are produced and published in
respected scientific journals, but few would agree
that this is a sound basis for comparisons between
different statistical offices.
However, some general questions
may be asked to assess (the focus on) professionalism
in national statistical offices.
6. How well is professionalism systematically
promoted and shared by such mechanisms as analytical
work, circulating and publishing methodological
papers, and organizing lectures and conferences?
7. Are statistical methods well documented
and are methodological improvements made on the
basis of scientific criteria?
8. Are decisions about survey design, survey
methods and techniques etc. made on the basis
of professional considerations (or do other -
e.g. political - considerations play a role)?
9. Is training and re-training of professional
and other staff a real policy issue for the organization
and is enough effort (e.g. in a percentage of
the overall budget) spent on training?
10. Is statistical quality management a real
policy issue and are real and systematic efforts
(including the promotion of well-documented quality
management guidelines) made to enhance the quality
of statistics?
As to the aspect of 'professional
presentation' of statistics, some comments were
already made under 'impartiality'. Some other
points will be made under the next paragraph on
accountability.
3. To facilitate a correct interpretation
of the data, the statistical agencies are to present
information according to scientific standards
on the sources, methods and procedures of the
statistics.
Accountability is understood
in the sense that statisticians should systematically
and thoroughly explain to the users of statistics
what the numbers exactly represent and what their
quality and limitations are.
To some extent this principle
may seem trivial, but considering that the issue
has long been (and still is) a topic for lively
debate among statisticians, there are some non-trivial
aspects involved as well. The triviality lies
in the fact that it is obvious that when one produces
and publishes numbers, one should inform the user
in some way what the numbers are about. The debate
is on how to do it in the best possible manner.
This debate of course also includes the question:
how far and how deep to go in this respect? Experience
shows that some users are very deeply interested
in 'what's behind the numbers', while others,
so to speak 'couldn't care less'.
In terms of so-called meta-data
(information about the data, i.e. definition of
the population covered, definition of the variables,
description of the data sources used, description
of survey methodology, etc.), there is broad agreement
that it is essential for the users of statistics
to have access to as complete a set of meta-data
as possible. Therefore, national statistical offices
should see to it that full descriptions of the
complete methodology for all their collections
are documented and kept up-to-date. This does
not imply, obviously, that all statistical publications
must contain a full set of meta-data, because
that would be both impractical and user-unfriendly.
Statistical databases, however, should preferably
contain all the meta-data in some user-friendly
form, because it would be a burden for the users
to have to consult separate publications to see
what the data are worth. A special point of concern
is to ensure that the data elements in a time
series are consistent and if not, to inform the
users clearly about the exact nature of any inconsistencies.
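The meta-data components listed above (population covered, variable definitions, sources, methodology, and documented breaks in series) can be pictured as a simple record kept per statistical series. The sketch below is purely illustrative; the field names and example values are hypothetical, not an agreed standard.

```python
from dataclasses import dataclass, field

# Illustrative meta-data record for one statistical series; the field
# names are hypothetical and do not follow any official standard.
@dataclass
class SeriesMetadata:
    series_name: str
    population_covered: str        # definition of the population covered
    variable_definitions: dict     # variable name -> definition
    data_sources: list             # surveys and/or administrative registers
    survey_methodology: str        # sampling design, collection mode, etc.
    breaks_in_series: list = field(default_factory=list)  # documented inconsistencies

meta = SeriesMetadata(
    series_name="Quarterly unemployment",
    population_covered="Resident labour force aged 15-74",
    variable_definitions={"unemployed": "without work, available for work, seeking work"},
    data_sources=["Labour Force Survey"],
    survey_methodology="Stratified sample, rotating panel",
    breaks_in_series=["New sampling frame introduced; break documented for users"],
)
```

Keeping such a record for every collection, and exposing it alongside the data in statistical databases, is one practical way to meet the requirement that users can always see what the data are worth.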
A good example of meta-data
is the 'Sources and Methods' accompanying the
OECD publications about Short Term Economic Indicators.
Also, the initiative taken by the International
Monetary Fund in 1996 to set standards (general
standards for all countries, plus so-called special
standards for the most developed countries) for
meta-data about a set of major statistical series,
must be mentioned in this respect. A large number
of countries have now endorsed these standards.
The question to be asked with
regard to meta-data is therefore:
11. How well does a statistical office provide
the users with information about what the data
really mean and about the methodology used to
collect and process them?
Another issue, which is closely
related to the previous paragraphs on meta-data,
but which is nevertheless slightly different,
is how statistical offices inform the users about
the quality of the data they produce. Proper meta-data
may tell a lot about the quality of statistics
(at least for 'professional' users), but they
do not give the whole picture. Therefore, though
there may be a certain overlap between the two,
explicit statements about the quality of statistics
are an additional aspect of principle 3. Quality
in particular concerns such aspects as sampling
and non-sampling error, any biases the data may
have, information about non-response and its treatment,
about imputations etc. In the eighties, the Conference
of European Statisticians of the United Nations
Economic Commission for Europe adopted 'Guidelines
for quality presentation', which are still very
useful and are applied in some form or other,
but often not systematically, by quite a few statistical
offices. The question is therefore:
12. How well developed and applied is the
presentation of the quality of statistics?
Prevention of misuse
4. The statistical agencies are entitled to
comment on erroneous interpretation and misuse
of statistics.
Principle 4 means simply that
statisticians may react to any wrongful use of
statistics that they perceive. Although the official
wording of the Principle is 'entitled', the general
understanding of the Principle is, that statistical
agencies indeed have a duty to comment.
There are of course many different
ways to define 'erroneous interpretation' and
'misuse' and not all forms of these are equally
bad or harmful. Moreover: most instances of misuse
will escape the attention of statistical offices.
Many users know 'how to lie with statistics',
but this need not always be a concern for statistical
offices.
However, there are some kinds
of misuse where corrective actions may be required:
in particular misuse by government agencies and
misuse by the media. For both categories of misuse,
it is commendable for statistical offices to undertake
immediate corrective actions in whatever way.
In Statistics Canada it used to be (and probably
still is) standard policy that when any misrepresentation
or misinterpretation of official statistical numbers
in the media was noticed, the Chief Statistician
wrote a letter to the editor explaining that a
mistake had been made and how the numbers ought
to have been correctly presented. Similar steps
were also taken for government misuse. It was
felt that this general attitude has had positive
effects as to the 'education of important users
of statistics'. It may be argued that the Fundamental
Principle in question is perhaps too defensively
worded and that the real issue is that NSIs, more
in general, should make an effort to educate and
train the users, not so much in order to prevent
misuse, but to promote the best possible use.
So, while it may be difficult to prescribe a standard
recipe for these situations, the general question
that may be asked is:
13. How well and systematically do statistical offices
educate their key users in order to promote proper
use of statistics and to prevent misuse?
5. Data for statistical purposes may be drawn
from all types of sources, be they statistical
surveys or administrative records. Statistical
agencies are to choose the sources with regard
to quality, timeliness, costs and the burden on
respondents.
Principle 5 means that statistical
offices must try to be as cost-effective as possible
by making the best choice of sources and methods,
aiming at improved timeliness and also data quality,
at spending tax-money as efficiently as possible
and at reducing the response burden.
To some extent, possibilities
to achieve cost-effectiveness depend on national
circumstances. In countries where there are good
administrative registers which are available for
statistical use as well, the need to have censuses
or indeed traditional sample surveys will be less
than in countries where such registers do not
exist, are of poor quality or are not put at the
disposal of the statisticians. One of the most
eloquent examples of how the national administrative
infrastructure affects statistical expenditure
very directly is the population census. Whereas
in countries which do not have a population register
(such as the United States) very costly periodic
population censuses remain necessary, other countries
(such as the Scandinavian countries and the Netherlands)
nowadays produce very much the same statistics
that were previously collected through a census
by using registers and some additional sample
surveys, at a mere fraction of the cost.
In terms of data input, making
the best possible, balanced choice of data-sources,
given national circumstances, should therefore
be an important issue for all statistical offices.
This includes the obligation of statisticians
continually to identify administrative sources
(registers and others) that may be used for statistical
purposes, as well as to help develop such sources
by having their content and quality
adapted and enhanced.
The general question to be asked is:
14. How well considered is the 'data sources
mix' that is used by statistical offices and is
achieving the best possible mix (also taking cost-effectiveness
into account) a subject of systematic improvement
efforts?
In the different phases of data
throughput (the data editing process, aggregation,
analysis etc.), there are also many possibilities
to increase timeliness, efficiency and/or to improve
data quality. There are organizational issues
to be considered, as well as methodological and
technological aspects and many of these issues
and aspects are inter-related. For example: introducing
macro-editing instead of the more traditional
micro-editing approach is only possible when statisticians
are well-trained in this new approach and can
make use of advanced information technology (software
and hardware). It is impossible to briefly give
some general guidelines, but the central question
here seems to be fairly straightforward:
15. How effective and efficient is the data throughput
in statistical offices, in terms of organization,
methodology and technology?
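The macro-editing approach mentioned above can be illustrated with a minimal sketch: check aggregates first, and drill down into the micro-records only where an aggregate looks suspicious. The function, threshold and figures below are invented for illustration.

```python
# Minimal macro-editing sketch: flag cells whose aggregate deviates too
# much from the previous period; only flagged cells would then have
# their underlying micro-data inspected.
def flag_suspicious_cells(current, previous, max_rel_change=0.10):
    """Return cell keys whose aggregate changed by more than max_rel_change."""
    flagged = []
    for cell, value in current.items():
        prev = previous.get(cell)
        if prev and abs(value - prev) / prev > max_rel_change:
            flagged.append(cell)
    return flagged

previous = {"manufacturing": 1000.0, "services": 2000.0}
current = {"manufacturing": 1030.0, "services": 2500.0}  # services jumps 25%
print(flag_suspicious_cells(current, previous))  # ['services']
```

The efficiency gain comes from concentrating scarce editing effort where it matters, instead of reviewing every micro-record.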
And an additional question of
perhaps equal importance may be:
16. Is improving timeliness an issue of serious
and systematic effort?
The response burden generated
by statistical offices is another aspect of their
cost-effectiveness, because data collection, apart
from the spending of taxpayers' money, also implies
costs for data providers. Therefore, reducing
the response burden, in particular for data providers
from the private sector, is nowadays an issue
of concern in many countries. There are many different
techniques to reduce the response burden, some
of them fairly simple, others of a more 'high-tech'
nature. It should be noted here that the issue
of response burden is first of all relevant for
well-developed statistical systems, and probably
much less so for developing countries.
Comparison of the level of response
burden generated by different statistical offices
is very difficult, because the response burden
depends on a lot of factors, many of which are
related to very specific national conditions and
requirements. It is possible, however, to compare
the overall development (upwards or downwards)
of the response burden, as well as the general
attitude of statistical offices with respect to
the issue. A general question that could be asked is:
17. How successful has a statistical office
been in systematically reducing the response burden
it imposes on data providers?
Cost-effectiveness is obviously
also a matter of organization, management and
even 'corporate culture'. It is very difficult
to measure the 'productivity' of statistical workers
and even more so to compare 'productivity' between
different statistical offices. Efforts to compare
the cost of specific, rather comparable statistical
operations (such as the Labor Force Survey or
the Consumer Price Index) in a few countries of
the European Union have so far been unsuccessful.
Because better standards
to measure productivity and cost-effectiveness
in statistics do not exist, The Economist was
probably right in defining a couple of simple
indicators to compare these issues between countries.
Therefore, it is proposed to stick to those indicators:
number of official statisticians per 10,000 population
and the government statistics budget per head
of the population. The Economist used the cost
of statistics as such more as a background variable
than as a performance indicator in its own right.
Performing well at a relatively low cost was of
course regarded as an additional positive feature
of a statistical system. It may be argued that
the Economist 'formula' is unfair for smaller
countries and that something like statistical
budget/√population would be a more adequate
measure. For countries which have a decentralized
statistical system, the numbers should of course
include both the central and the decentralized
parts of the system. The problem is, of course,
that the question 'how are we doing in this respect'
can only be answered if comparable data for other
countries are available. Nevertheless, the question
may be asked: how cost-effective is a national
statistical system (in terms of relative cost
indicators such as statisticians per 10,000 population
and statistics budget per head of the population)?
However, the question is fraught with problems:
a 'low-cost' NSI, for example, may be understaffed
rather than efficient. Therefore, the question
may also be phrased as:
18. Is the NSI regarded as being an efficient,
well-managed organization by its own government?
6. Individual data collected by statistical
agencies for statistical compilation, whether
they refer to natural or legal persons, are to
be strictly confidential and used exclusively
for statistical purposes.
Again, this seems to be a very
simple principle, but it has many ramifications,
some of which may involve very complex issues.
There is a well-known joke, often told in
countries which had a centrally planned economy
but are now moving towards a more market-oriented
economic system. It is: 'In our country, individual
data used to be widely known, while aggregates
always were top secret'. This clearly illustrates
how the Principle of confidentiality should not
be interpreted and applied. Unfortunately, it
does not say much about how it should.
There are diverse questions
to be raised about the concepts 'individual' and
'confidential'. The interpretation of the concepts
may also vary from country to country. However,
one should first of all consider what the true
meaning of the principle is: the self-interest of
statistical offices. The simple reason why statistical
offices must adhere to confidentiality of individual
data is that it is the only way to safeguard the
trust of the respondents. Respondents must be
certain that the information they give is used
for statistical purposes only and that they therefore
have no incentive to supply anything but true data.
One may look at the issues from
various angles. At the general policy level one
may take into account what the law (if any) says.
In many countries there is legislation about the
protection of the privacy of citizens. This often
includes provisions for statistics as well and
these provisions may be strict or less strict.
In The Netherlands e.g., the general 'personal
data protection law' makes some exceptions for
statistics and research, in the sense that data
files which are kept for statistical or research
purposes only, are not subject to the general
rule that individuals are entitled to check what
is registered about them in the files and to correct
this information if they so wish. Equally, the
confidentiality of individual business data is
often safeguarded legally, be it in a general
statistics law or in separate legislation. However,
in this respect there may be some more or less
essential differences between countries, in particular
as far as the legal possibilities for exchange
of company data between various government agencies
are concerned.
At a more basic and practical
level, it seems that most statistical offices
have some official policy, or at the very least
an accepted practice about how to prevent disclosure
of individual data in disseminating their statistical
products. A distinction may be made here between
disclosure protection in the case of traditional,
printed publications, and the more complex issue
of disclosure protection with respect to electronic
files with micro-data. For printed publications,
the rules are (in practice) often relatively simple,
such as (in particular in the case of business
statistics) suppressing cells in tables which
contain information about just a few (e.g. three
or fewer) individual entities.
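The suppression rule for printed tables described above can be sketched in a few lines. The function and the table contents below are illustrative only; real cell-suppression systems also handle secondary suppression, which this sketch omits.

```python
# Minimal sketch of the primary suppression rule described above:
# suppress any table cell that is based on three or fewer
# individual entities. Names and data are hypothetical.

def suppress_small_cells(table, threshold=3):
    """Replace cell values backed by `threshold` or fewer entities with None.

    `table` maps cell labels to (value, contributing_entities) pairs.
    """
    return {
        label: (value if count > threshold else None)
        for label, (value, count) in table.items()
    }

turnover_by_region = {
    "North": (1_250, 14),  # 14 reporting firms: publishable
    "South": (310, 2),     # only 2 firms: must be suppressed
}
print(suppress_small_cells(turnover_by_region))
# {'North': 1250, 'South': None}
```

In practice a suppressed cell must often be accompanied by secondary suppressions elsewhere in the table, so that the hidden value cannot be recovered from row and column totals.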
For electronic files the rules
may be more sophisticated, in particular when
it comes to so-called micro-data: files containing
(anonymous) information about individual entities.
In several countries (e.g. in the United States)
such files are made generally available for research
purposes: so-called public data files. The structure
of these files is such that disclosure of individual
data is considered to be virtually impossible.
A variety of techniques is applied to prevent
disclosure. In The Netherlands a distinction is
made between such public data files and another
type of micro-data: research-files which are not
100% 'disclosure-proof', and which are only made
available to certain categories of researchers
and under very strict legal provisions.
So there are some general questions
to be asked:
19. How well developed and practiced are the
rules to prevent disclosure of individual data
in printed and electronic publications?
20. How well developed are techniques and
systems to make statistical files available for
research purposes, while preventing disclosure
in the best possible manner?
Another issue regarding confidentiality
is the prevention of non-statistical use of statistical
data and guaranteeing administrative immunity
of respondent groups. This is a rather complex
problem area. When the draft of a Regulation for
Community statistics (better known as the 'European
Statistical Law') was discussed by the member
states of the European Union, prolonged debates
took place about the definition of and wording
around such concepts as 'statistical data', 'use
for statistical purposes' and 'non-statistical use'.
Yet another issue related to
the confidence of citizens in the national statistical
office concerns the perception of the public that
databases and networks within these offices are
in practice secure against external intrusions
(by 'hacking' or otherwise). In Statistics Netherlands
great care is taken to 'waterproof' the internal
systems from the outside world.
It is suggested not to include
all these points, however relevant and even important
they may be, in the 'performance indicator system'
which is the subject of this paper.
7. The laws, regulations and measures under
which the statistical systems operate are to be
made public.
Principle 7 means that the position
of statistical offices, including their rights
and obligations should be codified in proper,
publicly available legislation, in order to show
to the public what it may expect from the national
statistical system.
It is impossible to set out
very specific rules for statistical legislation.
Much depends on the legal culture and traditions
in countries. Many countries have a formal 'general
statistics law', but in others the statistical
legislation may be scattered over a series of
specific laws and various other government documents.
Neither situation, however, guarantees that
official statistics are in good shape; laws obviously
cannot solve all problems. In some countries which
do not have a 'general statistics law' (e.g. The
United States or the United Kingdom), many of
the best possible statistical practices may be
adhered to, while other countries may have a statistical
law which is perfectly formulated, but in practice
is not much more than just another piece of paper.
Nevertheless, the agreed view is that having a
general statistics law is the preferred situation.
Nevertheless, it is suggested
that statistical legislation and/or other legislation
which is also relevant for official statistics,
should cover all or most of the following basic
points, in order of importance:
The general position of
the national statistical office/system (including
points such as who decides on the work program,
who decides on methodological issues, how
are data collected, what are the relations
between the national statistical office -if
any- and other government agencies doing statistical
work, what are the relations between the statistical
system and the government/parliament etc.)
The position of the head
of the national statistical office/system
(including points such as who appoints and
dismisses, to whom does the 'national statistician'
report and about what, does he/she have any
specific responsibilities etc.)
Basic rules of data collection
and confidentiality (voluntary and statutory
data collection, any penalties for non-compliance
with compulsory data collections, general
and specific confidentiality rules)
In view of this, the question
to be asked about statistical legislation may be:
21. How good is the statistical legislation
in a country, in terms of clearly setting out
the mission and the competencies of statistical
agencies, the protection of their statutory independence,
legal obligations to provide information for statistical
purposes and the protection of confidentiality
of individual data?
In addition, some implementation
aspects of statistical legislation or of the principles
for good statistical conduct are to be taken into
account when it comes to the 'performance' of
statistical systems. In particular, it is generally
considered no more than sensible and decent
that respondents are always properly informed
about the legitimate basis for statistical data
collections and other activities of statistical
agencies, for instance by briefing them explicitly
about the statutory or non-statutory nature of
data collections. In the longer run, this is once
again a matter of self-interest: 'honesty is the
best policy'. A special issue in this regard is
'informed consent' of respondents as to any use
of the provided (individual) information for non-statistical
or research purposes.
The question to be answered is:
22. How well developed are the policies and practices
of dealing with respondents, in terms of ensuring
that they are fully informed of their rights and
duties with regard to statistical data collection?
8. Coordination among statistical agencies
within countries is essential to achieve consistency
and efficiency in the statistical system.
In other words: Principle 8
means that in order to prevent inefficiency, undue
response burden and the compilation of incomparable
statistics, effective mechanisms for national
coordination of statistics should be in place.
Statistical coordination has
two main aspects: coordination of programs (in
particular as to data collections), coordination
of statistical concepts and consistency of statistical
methods. Coordination of programs aims at achieving
efficiency (avoiding duplication of efforts) and
at reducing the response burden (avoiding collecting
the same information several times).
Coordination of standards (in particular definitions
and classifications) also has efficiency and response
burden effects, but primarily aims at the compilation
of comparable statistics. In this latter respect
it is important that the national statistical
office is recognized as the 'bureau of standards',
standards which are respected and followed by
all other agencies which may be active in official
statistics.
Obviously, coordination is easier
to achieve in countries which have a centralized
statistical system (such as Canada, Australia,
The Netherlands and others) than in countries
where official statistics are highly decentralized
(such as the United States, where more than 70
federal agencies are active in statistics) or
relatively decentralized (such as the United Kingdom,
France or Japan).
Nevertheless, coordination mechanisms
in countries with decentralized systems may be
well developed and successful, while coordination
in countries with a centralized system does not
always function perfectly. The question to be
asked is therefore:
23. How well developed are national statistical coordination
mechanisms and to what extent do they produce
the envisaged results?
9. The use by statistical agencies in each
country of international concepts, classifications
and methods promotes the consistency and efficiency
of statistical systems at all official levels.
Principle 9 basically means
that statistical offices should as much as possible
adhere to international statistical standards
and best practices, not only in order to produce
internationally comparable statistics, but also
in order to enhance efficiency of statistical
operations and the overall quality of statistics.
There are two different aspects
to international statistical coordination.
First of all, it is important
that national statistical systems follow international
definitions and classifications, in order to achieve
cross-country comparability of statistics. This
may seem simple and obvious, but poses considerable
problems in practice. International statistical
definitions and classifications are by definition
the result of a complex process of compromising.
The compromise may be such that some countries
can better live with it than others. In particular,
developing countries may have difficulty applying
the standards fully, because the process
of developing the standards is usually dominated
by the more advanced countries. Also, some 'blocs'
of countries (e.g. the European Union) may wish
to have their own specific standards, which sometimes
are slightly different from the world (UN) standards.
Therefore, there is general international agreement
that international coordination in this respect
should be 'flexible', in the sense that countries
or groups of countries are entitled to diverge
from the world standards, as long as they ensure
that the linkage between their standards and the
world standards is straightforward and transparent.
The second aspect of international
coordination is that countries should benefit
as much as possible from methodological, organizational
and other practical developments elsewhere. This
form of coordination aims at improving efficiency
and enhancing the quality of statistical products.
Taking both aspects in one stride,
the question to be asked with respect to this
principle would be:
24. How well does a statistical system adhere
to agreed international standards and does it
contribute to the best of its abilities to the
further development and promulgation of best statistical
practices?
International statistical cooperation
10. Bilateral and multilateral cooperation
in statistics contributes to the improvement of
official statistics in all countries.
Principle 10 means that international
cooperation is a prerequisite to enhance the overall,
worldwide quality of official statistics. Therefore,
national statistical agencies should regard it
as part of their core activities to assist other
countries to the best of their abilities.
Apart from international meetings
of statisticians, where (the improvement of) statistical
standards is discussed, there is quite a lot of
other international statistical cooperation going
on. International organizations are trying to
promote the use of standards and best practices
by issuing handbooks and guidelines in many languages.
Some of them also organize and finance technical
cooperation programs for developing countries
or countries in transition from a centrally planned
economy to a market economy. There exist a considerable
number of training institutions, in all continents,
where statisticians are trained in statistical
methods, techniques and practices. In addition,
there is much bilateral cooperation going on between
countries, sometimes financed from international
funds, sometimes from national aid programs.
The efficiency and effectiveness
of international technical cooperation in statistics,
in terms of avoiding duplication and promoting
a systematic, goal-oriented approach, is also
a topic of continuous discussion between national
statistical agencies and international organizations.
The question to be asked with
regard to this principle would be:
25. How actively is a statistical agency involved
in international technical assistance?
Obviously, the aim of this article
is not to conclude the discussion about the Fundamental
Principles of Official Statistics, but rather
to stimulate it. Although the Principles are supposed
to lay down more or less 'eternal' standards and
values and will hopefully stand for at least a
few generations, the best ways to implement them
will be much more fluid and subject to economic,
social, political and technological developments.
In 1997, the United Nations Statistical Commission asked
an ad hoc group to draft a comprehensive document
on 'best practices in official statistics', bearing
the Principles in mind. In doing this, the group
has started a process of extensive consultations,
in order to take the largest possible variety
of practices into account.
The question has been raised,
and rightfully so, whether the approach that is
advocated in this paper, ultimately produces real
indications about which are 'good' or 'better'
statistical systems. A statistical system that
scores high on 'the indicators', it is argued,
may have a high ethical and professional standard
and may do its very best in many ways, but is
there any guarantee that it produces good, relevant,
timely statistics? The answer to that question
would probably be: no, but nevertheless the author
is convinced that there is a high positive correlation
between scoring well on 'his indicators' and being
a successful system in terms of output. However,
it should be recognized that this article is primarily
about inputs and efforts, not so much about outputs
and results. Measuring those is perhaps even more difficult.
Naturally, statisticians are
keen on measuring and quantifying. However, discussing
measuring issues is beyond the scope of this article.
And another point that may be raised is: when
answering the questions, should one have some
'absolute' norm in mind or should one rather take
a more relative point of view and look at practices
in other statistical systems? The author would
suggest that some issues are by their nature perhaps
more 'absolute' than others, but that most issues
can only be considered against the background
of the legal, economic and cultural conditions
of a given country.