With international media coverage of the controversies surrounding Facebook and Cambridge Analytica, many more people are realising that our data is often collected and used in ways we may not be immediately aware of. This applies not just to profiles we choose to set up, but also to information given away by our friends through the choices they make and our connections to them. As we live our lives in increasingly connected spaces, it becomes more difficult to remain data invisible. This is very relevant to TrustLens, as we extend these questions of data privacy and risk to ‘real world’ physical IoT sensors, devices and systems.
It is telling that these conversations around data management and the abuse of public trust are happening now, shortly before the EU GDPR comes into effect in May, and that they coincide with wider discussions among policymakers and others. Questions of data governance also extend to connected sensors in our physical spaces, particularly when they are installed in the name of the ‘public good’.
Organisations such as the newly formed Open Privacy research society are intent on developing trustworthy systems that enable informed consent by users. Ideally, consent should be an ongoing conversation with a platform that demonstrates transparency in its actions. True informed consent is unlikely to be obtained simply through lengthy privacy policies, which are seldom read, let alone understood. Equally, not all users can choose to disconnect from systems that are increasingly fundamental to modern life and carry a social cost to give up, particularly for marginalised groups. We suggest that transparency is key to properly understanding and assessing risk, and a first step towards giving control and agency to those impacted by digital systems.