The Facebook–Cambridge Analytica scandal raises important questions about the use and security of users’ data, and about the operating practices of such companies. The scandal is not so much that there was a “breach”, but rather that users’ data had been shared as part of Facebook’s business model — a model that relies on the value provided by the data that Facebook’s users share. This data, it turns out, is of value not only to advertisers, but also to political analysts and campaign consultants.
Arguably, this situation has developed because Facebook is a form of what has been termed ‘platform capitalism’ (see Nick Srnicek’s Platform Capitalism for more information):
‘With a long decline in manufacturing profitability, capitalism has turned to data as one way to maintain economic growth and vitality in the face of a struggling production sector.’ (Srnicek, 2016, p. 6)
Platforms as data engines
This is where platforms come in. If data has become a massive new raw material for capitalism, then platforms are the engines that allow capitalism to process it.
Given that such platforms need data as the basis of the product they turn into profit, techniques developed at places such as Stanford University’s Persuasive Technology Lab are used to develop and refine these platforms so that people enjoy sharing their data — photos, comments, ‘likes’, etc. — and receiving positive feedback. This “quantifiable social endorsement” (Sherman et al., 2018) reinforces data-sharing behaviour and increases the likelihood that people will continue to provide data to such platforms.
As with many new technological developments, ethics seems to be the caboose on the end of the train, never able to get ahead and steer the developments powered by new scientific and technological discovery and exploration. While there have been increasing discussions about the ethics of platform capitalism’s products, regulations and norms have not yet caught up with the ethical issues engendered by these new business models.
Facebook and Instagram (Facebook owns Instagram) have been accused of emotionally manipulating users to encourage them to keep checking their feeds — in Instagram’s case, by withholding users’ ‘likes’. Twitter’s and Facebook’s algorithmically determined feeds highlight emotional and inflammatory content, and they create bubbles: people are not in a shared online space, but are instead given their own individual feed based on what they have previously liked and interacted with. This latter feature has been blamed for the advent of ‘post-truth’ culture, echo chambers, and the emotion-driven civic discourse that has seen the rise of populism in politics. Such practices have led to calls for regulatory controls: limiting how users’ data can be used, breaking up some of the big platforms into smaller businesses, making privacy settings easier for users to control, and developing simpler terms and conditions for these products so that consumers understand what they are agreeing to.
It could be argued that many of the issues raised by the use of such platforms need to be carefully considered in education. For, as in society at large, ethical considerations often come only after new products have been installed and used and their problematic aspects have become apparent.
Education and EdTech
Education is filled with discourses suggesting that its problems can be solved with technological solutions. Teachers’ and administrators’ work can be made easier with the right software packages, computer-based adaptive testing will be more accurate and less flawed than paper-based standardised tests, the right tech products will engage students in their learning and help prepare them for work in ‘the real world’, etc., etc., ad nauseam.
In Australia, after the Digital Education Revolution, secondary school students are expected to have laptops, and increasingly even primary schools are introducing one-to-one laptop programs for upper-years students. With computer usage a normal part of teaching and learning, questions arise about which products are best to use in the classroom and which allow seamless connection between home and school. The ubiquity of technology in schools has gradually allowed corporations more power in education. Software and hardware providers, and companies offering curriculum materials, administration packages, testing packages, and behavioural monitoring products, all spruik their wares, as schools represent not only a big business opportunity but the chance to make lifetime customers out of students.
Given that school leaders are mediators of policy, this role now involves mediating the involvement of corporate interests in schools. There are ethical tensions here.
One of the highest priorities of school systems is protecting children, and the internet is perceived as a source of harm for children. However, in corralling the web and only allowing students access to particular sites, are schools able to give students the skills to navigate the web safely and confidently? In some jurisdictions, the software used to filter the web is made by the same corporation that monitors and blocks the internet in China and North Korea. Is that where taxpayers’ money should go?
Keen teachers are often early adopters of technology, and their actions can be in tension with their departments. I was told that, some time ago, teachers were unofficially using products such as Google Docs even though the department did not endorse them. Behind the scenes, however, after a long period of negotiation with Google to ensure that students’ data would be protected, the department officially allowed teachers to access these products. The enthusiastic teachers were trying to help their students learn and could see the advantages of such products, while remaining unaware of the department’s responsibilities in terms of data management and security.
Products such as ClassDojo are popular with teachers, who may not think about the uses of the data being collected or the way the program normalises some student behaviours and pathologises others. Like Facebook with its ‘likes’, ClassDojo strives to modify student behaviour through quantifiable social endorsement.
New technologies offer fantastic opportunities for learning, and there is a rush to implement the newest and best in order to ensure that students are not disadvantaged by missing out. However, the corporations pushing particular products are not always doing so for educational reasons.
For example, huge advances have been made in VR, and with high-end products coming down in price there has been a push to put VR in schools. However, what do we know about the effects of putting children in highly immersive VR? Many people are unaware that manufacturers do not recommend highly immersive VR equipment for those younger than 14, as the headsets have been built for adult sizes — not to mention that young children have trouble discerning what is real and what is not.
Although I have used technology examples here (reflecting my area of research), they demonstrate the importance not only of good policy that takes into account the needs of all stakeholders, but also of school leaders mediating policy — including the current policy push for technology in schools — so that its ethical implications are considered.
References:
Sherman, L. E., Greenfield, P. M., Hernandez, L. M., & Dapretto, M. (2018). Peer influence via Instagram: Effects on brain and behavior in adolescence and young adulthood. Child Development, 89(1), 37–47. DOI:10.1111/cdev.12838
Srnicek, N. (2016). Platform Capitalism. Cambridge: Polity Press.