The ASSERT approach to the societal impact assessment of security research draws upon existing approaches to societal impact assessment, including Social Impact Assessment, Constructive Technology Assessment and Privacy Impact Assessment. This page provides some background on the history of these fields. The material is drawn primarily from deliverable 1.2 of the ASSERT project, Report on methodologies relevant to the assessment of societal impacts of security research (available here).
Social Impact Assessment
Social impact assessment (SIA) originated in the 1970s, when US federal agencies were prompted to draw on insights from the social sciences – both conceptually and methodologically – when deciding on plans and projects that might have an impact on the environment (see Freudenburg 1986). From the beginning, SIA was a ‘soft’ requirement in the sense that it was something that should be done, without any tangible legal or political sanctions if negative effects were foreseen. This has remained one of the core points of criticism of SIA, even as it broadened its scope from environmental impacts to social impacts more widely (see also the section on environmental impact assessment below), and as it became a mandatory step in planning certain kinds of projects (not only by governmental agencies, but also by businesses and, in specific cases, by individual parties).
In 1983 the social theorist Raymond Williams described SIA as ‘an area where several disciplines converge, but in general do not meet’ (Williams 1983: 15). Thirty years later, the literature and practice of SIA are still shaped by contributions and insights from various disciplines and practical contexts, without sharing a common conceptual or methodological outlook. Besides the clear disadvantages of such a situation (difficulties in implementing common ‘best practice’ standards, or in facilitating discussions about improving SIA methodologies when understandings of SIA’s main objectives vary), it also has advantages. SIA is carried out in a variety of contexts: research, technology development and deployment, policy-making, implementation, evaluation, and so on. These contexts have different needs and requirements. A conceptual and methodological toolbox whose elements can be customised to suit particular contexts, rather than one focused on establishing or maintaining internal coherence at the cost of broad applicability, clearly has some benefits.
Groningen-based geographer and sociologist Frank Vanclay (2003), one of the ‘fathers’ of SIA, noted that the term can refer to any or all of the following: a technique, a methodology, and/or a paradigm or sub-discipline of applied social science (see also Howitt 2011: 78-79). The literature addresses SIA in any or all of these manifestations, often without spelling out explicitly which one a particular publication or approach seeks to contribute to.
Today, one of the most influential definitions of SIA in the literature is Frank Vanclay’s understanding that SIA includes ‘the processes of analysing, monitoring and managing the intended and unintended consequences, both positive and negative, of planned interventions (policies, programmes, plans, projects) and any social change processes invoked by those interventions’ (2003: 6; see also Vanclay 2006). Drawing upon this understanding, Vanclay and Esteves (2011), in their introduction to their edited volume on new directions in SIA, propose that SIA should focus on:
managing the social issues associated with planned interventions; taking a holistic and integrated approach […]; and stressing that greater attention needs to be given to ensuring that the goals of development (project benefit) are attained and enhanced. In this current understanding, SIA is much more than the act of predicting impacts in a regulatory context (the old traditional view); it is the process of managing the social aspects of development. (cf. Vanclay & Esteves 2011: 3)
Constructive Technology Assessment
Technology assessment (TA) in general aims to forecast the likely implications of technologies for various other systems, in order to learn about and avoid possible harmful impacts.
Biegelbauer and Loeber (2010: 17-18) summarise different attempts at classifying types of TA: Smits and Leyten (1991), for example, distinguished between (1) ‘Awareness TA’ (ATA), which focuses on long-term technological potentials and developments and on creating awareness of societal choices; (2) ‘Strategic TA’ (STA), which is sector- or problem-specific and has a medium time horizon; and (3) ‘Constructive TA’ (CTA), with its focus on the short-term design and construction stages of the innovation process.
An earlier typology by Bechmann (1993) differentiates between (1) instrumental models of TA aiming at increasing the effectiveness of political and administrative procedures, (2) the ‘Elite Model’ of TA, requiring the participation of highly qualified experts, and (3) a ‘Democratic Model’, in which ‘the general public’ plays a significant role. As Biegelbauer and Loeber note, it is the third type of TA that is now commonly referred to as ‘Participatory TA’.
As apparent in Smits and Leyten’s (1991) classification, CTA is a particular approach to technology assessment. It ‘experiments’ with a technology in society, rather than in the laboratory (see Genus 2006: 19), emphasising
(1) the need for reflexivity and participation of a wide range of actors in the technology assessment activities (e.g. Genus 2006);
(2) ‘alignment’, which focuses on how interactions among participants foster and facilitate social and policy learning, so as to enhance understanding of the possible future impacts of technology (see Genus 2006; Schot & Rip 1996). The modulation of these processes should be seen as ongoing, not as a one-off event;
(3) integration of anticipation of the future effects of technology into the promotion and introduction of technology, i.e. actors involved in control activities should actively participate in technology design and development activities (Genus 2006; Rip et al. 1995; Rip & Schot 2002; Rip 2001; Schot 2001; Schot & Rip 1996).
One of the founders of CTA, Arie Rip (2001a), suggested that the main value of this form of technology assessment lies not in ‘accurately’ predicting all impacts, but in the dialogue and interaction between those participating in the assessment process. Such dialogue and interaction enhances the mutual understanding of each actor’s needs and objectives, and is likely to improve problem-solving capacities even if negative impacts occur. This approach is characteristic of CTA. Moreover, CTA clearly sets itself apart from the ‘deficit model’ of public engagement, in which publics are seen as needing to be ‘educated’ about science in order to appropriately understand, and ultimately accept, new science and technology. CTA’s approach to public engagement is participatory and dialogical. In other words, the emphasis of CTA lies not in predicting impacts and facilitating technological development and deployment, but in understanding them (as well as the dynamics which facilitate or impede their occurrence). As Rip & Schot (2002: 157) put it, ‘understanding the dynamics of development allows one to identify opportunities of intervention, and specify how such interventions can be productive. This will not resolve all complexities of anticipation and intervention, but will go some way to mitigate them.’
For the innovation and technology management expert Audley Genus (2006: 13), CTA aims ‘to produce better technology in a better society, and emphasis[ing] the early involvement of a broad array of actors to facilitate social learning about technology and potential impacts’. It presents
a particularly promising approach to technology assessment that builds on earlier insights gained from research concerned with the impacts of technology on society, the social shaping of technology, and incremental decision-making, and attempts to find ways to improve the social robustness of technology. (Genus 2006: 14)
Privacy and Surveillance Impact Assessment
So-called Privacy Impact Assessments (PIAs) are rooted in concerns about data security that accompanied computer development in the 1970s, and became increasingly common in the Anglo-Saxon world in the mid-1990s (Clarke 2011). It is important to note that the development of PIA was pioneered in countries such as New Zealand, Canada, Australia, the USA and the UK, where governance approaches rely much more on self-regulatory instruments (for the origins and development of the concept, see Clarke 2009). The term ‘surveillance impact assessment’, by contrast, did not appear on the Social Science Research Network (SSRN) at all in 2012, as Wright and Raab (2012: 613) note. The approach was founded by these authors, one of whom (David Wright) is part of the ASSERT consortium.
PIAs are designed to assess proposed technologies or practices with regard to the risks that they pose for infringements of privacy. Wright et al. (2013: 163) defined PIA as ‘a methodology for assessing the impacts on privacy of a project, technology, product, service, policy, programme or other initiative and, in consultation with stakeholders, for taking remedial actions as necessary in order to avoid or minimise negative impacts.’ PIAs should thus be treated as procedures accompanying the development and implementation of the operational product, rather than as a one-off event.
The importance of PIA has also been acknowledged in the proposed European General Data Protection Regulation (GDPR). Already in 2009, the Commission had issued a Recommendation stating that ‘Member States should ensure that industry, in collaboration with relevant civil society stakeholders, develops a framework for privacy and data protection impact assessments’ (European Commission 2009). This framework was to be submitted for endorsement to the Article 29 Data Protection Working Party. The Working Party rejected the first submission from industry, but endorsed a revised document in 2011 (European Commission 2011), highlighting in particular the importance of wide stakeholder engagement, transparency, dialogue, and ‘privacy by design’ (see also Wright et al. 2013: 161). Moreover, Article 33 of the proposed Regulation requires data controllers and processors to perform a PIA before carrying out potentially infringing data processing operations. The assessment shall encompass a description of the planned operations, an evaluation of the risks they might pose to the rights and freedoms of specific people and groups, and the envisioned protection and control measures. A finalised PIA must be submitted to the relevant supervisory authority.
Surveillance Impact Assessment
The question arises whether PIA instruments are sufficient to close the gaps between law and practice. The focus of existing discourses on the ethical and social implications of surveillance has been criticised as unduly narrow. In the words of David Lyon, the risks ‘go well beyond anything that quests for “privacy” or “data protection” can cope with on their own’ (Lyon 2003: 2). Concerns pertaining to psychological harms, or to the availability of countermeasures against surveillance, have not received sufficient attention in the PIA literature or in the operationalisation of PIA methodologies. There is thus an urgent need to operationalise such concerns, including the question of how privacy is socially distributed among different social groups.
Building on Marx’s approach, Charles Raab picked up on the idea of actively ‘questioning surveillance’ in a Report on the Surveillance Society, prepared by the Surveillance Studies Network for the UK Information Commissioner’s Office in 2006. This report coined the term surveillance impact assessment. While it ‘may include, but also transcend, privacy itself’, surveillance impact assessment is first of all grounded in the observation that ‘many surveillance practices have a direct effect on the nature of the society in which they are embedded, in terms of categorical discrimination (or empowerment), social exclusion, and other outcomes that would still be causes of concern even if the invasion of individual privacy were not in question’ (Surveillance Studies Network 2006: 93). In two further papers, David Wright and Charles Raab argued that a surveillance impact assessment should follow the process of a PIA; however, they point out principal differences between the two. First, surveillance impact assessment extends the normative perspective to address other ‘social, economic, financial, political, legal, ethical and psychological’ issues and impacts. Moreover, its focus is not the individual and the related societal effects of privacy infringement, but ‘social groups or society as a whole’. Finally, due to this extended perspective, a surveillance impact assessment must ‘consult and engage with a wider range of stakeholders than a PIA’ (Wright & Raab 2012: 615).