According to the WEF Global Information Technology Report 2007-2008, India ranks 50th out of 127 countries analysed, behind countries such as Barbados, Latvia, Tunisia, Thailand and the Slovak Republic. Similarly, India's per capita public sector IT spend stands at $1.29, compared with $199 in New Zealand and $153 in Singapore. A recent survey was also conducted by the government of India to measure the impact of key projects at the central and state government levels under the National e-Governance Plan (NeGP); its results are being published on the website of the Ministry of Information Technology for comments from stakeholders.
It is important to point out here that most of the surveys and research done so far are quantitative rather than qualitative in nature, and that comparisons are drawn between countries of hugely diverse population sizes and different governance structures. However, the evidence suggests that when it comes to measuring the impact of ICT interventions in public service delivery, qualitative analysis is more valuable, and it requires extensive knowledge of grassroots realities compared to the more formal approach of quantitative analysis. A qualitative analysis demands a deep understanding of the initiative's audience and their aspirations. It is now accepted by many researchers and advocacy specialists that the analysis of real-life situations through community engagements, workshops and multi-stakeholder consultations, and their documentation (in print as well as multimedia formats), has more influence on both the suppliers and the consumers of services. It is in this light that the current article analyses the nuances of qualitative and quantitative impact and presents stakeholders' views on the recent impact assessment study carried out by the Department of Information Technology, Government of India.
India has been treading the information and communication technologies (ICT) highway for well over three decades now. What began as a process that simply meant computerisation of back-end operations in government offices is today poised to reach out to the unreached, taking the government to the people.
E-governance is essentially about combining information and communication technologies (ICT) with organisational change and new skills in order to improve public services, democratic processes and public policies, resulting in cost-effective public service delivery and better productivity along with greater transparency and accountability. In fact, it was the need to take a holistic view of the entire process that led the government in 2003 to formulate the National e-Governance Plan (NeGP). The NeGP seeks to adopt a programme approach for speeding up e-governance initiatives across the various arms of government at the national, state and local levels.
So what is the movement on the ground? If the impact assessment study of mature national projects that have been implemented in India is any indication, only one of the three projects has had a positive impact on users. The study, conducted by the Department of Information Technology (DIT), reviewed three projects: collection and processing of Income Tax, registration of new companies (MCA21), and issue of passports. According to the study, “the passport project has had virtually no impact and the results of the income tax survey indicate that whereas corporate users had benefited on some aspects, individual filers had not benefited significantly.”
In the context of the impact assessment study carried out by DIT, it is no exaggeration to say that the findings of the report are coloured by an inherent shortcoming of quantitative analysis: treating projects of diverse nature, scale and audience with a common yardstick for evidence gathering. Additionally, initiatives that deploy ICTs for the delivery of public services are a relatively new phenomenon, not yet fully backed by the process and organisational changes that such deployments warrant. In this light, it is pertinent to be aware of some basics before delivering judgments on any of the initiatives under the National e-Governance Plan (NeGP). Some of the facts that need to be considered are:
Given that the NeGP was approved by the cabinet as a national programme only in 2006, not enough time has elapsed to allow reliable judgments to be made on the impact of these programmes, or on the opportunities they will offer once the delivery infrastructure and process changes are in place.
The impact assessments undertaken by a multitude of agencies are based on short-term evaluations of pilot projects, and focus narrowly on particular aspects as framed by the quantitatively oriented questionnaires administered to target beneficiaries. There has been little assessment of the holistic qualitative experience, or documentation of real-life stories of outcomes and impact at a societal level.
The assessment methodology has been based on gathering evidence to compare diverse projects: on one side are projects with fully automated systems backed by new process models (MCA21), while on the other are projects where service delivery still relies on a mix of manual processes and ICT enablement of only parts of the process (Passports MMP). This has led to a focus on comparing different inputs against similar expectations of outputs.
Efforts need to be accelerated to combine qualitative and quantitative methodologies for collecting evidence from the different stakeholders and end users of the projects, so as to make a cohesive assessment of the actual impact of the projects.
Another important factor that has influenced the evaluations and surveys so far is that advocates of e-governance from within industry, civil society organisations and the media have been overenthusiastic, unresponsive to the principles of inclusive and sustainable development, focused on short-term goals, generally ignorant of the challenges of deploying new media technologies in poor communities, and insufficiently concerned by the limitations of the available evidence base.
Government agencies largely work on the basis of precedent and, as pointed out earlier, not enough time has elapsed to gather sufficient quantitative evidence on the impact of e-governance programmes. In this scenario, a focus on qualitative studies is warranted. The evidence base of the agencies engaged in qualitative impact assessment has improved as more time has elapsed, enabling them to be more effective partners in carrying out such studies. Additionally, we also need to distinguish between projects designed to improve the enabling environment for the delivery of public services (for example SWAN, SDC and back-end automation) on one hand, and initiatives with a direct instrumental purpose (for example e-district, PDS and railway reservation) on the other. Completing the roll-out of the first type of project will significantly enhance the reach and impact of the second type. It would be better to acknowledge this duality and build the evidence base for impact assessment in full recognition of its significance.
Similarly, we need to clearly understand the difference between evaluations (evidence-based quantitative analyses of project outcomes against intended objectives, generally considered part of the project itself) and impact assessments (much broader qualitative analyses of the consequences of an initiative, including unexpected impacts that may not even have been envisaged when the initiative was conceived). Qualitative impact assessments are more reliable instruments for studying lasting and sustainable change in the lives of citizens, including investigating what changed, how that change came about, and the extent to which it can be attributed to the particular intervention.
In the light of the above arguments, the study conducted by the DIT can best be categorised as an evaluation study, with the shortcoming of applying the same yardstick to all the projects even though the intended outcome of each project analysed may differ according to the priorities of the department concerned, making the comparison unrealistic. Our view is that the department will need to give due importance to qualitative analysis if the study is really to serve as an impact assessment.