Meet Swati Popuri
Data Privacy Architect at Allegis Group
Responsible AI enthusiast, Ayurvedic chef, and human.
Q: As a data privacy architect, what concerns do you have about AI?
Swati Popuri: As a Data Privacy Architect, my number one concern is the privacy of people like you and me whose data is being used to train the models without our consent. However, I feel optimistic that we can all benefit from AI, provided the right frameworks and privacy-enhancing technologies are in place.
I started focusing my career more on the privacy and protection aspects of data management in 2017, right before the European Union's General Data Protection Regulation (GDPR) went into effect. It was one of the first revolutionary data privacy regulations, and it was such a "hot topic" at the time.
Since then, every organization has had to maintain a record of processing activities that captures four elements whenever data is collected (see the sketch after this list):
- What personal data is being collected?
- Why was it collected?
- Who is using it?
- Where in your ecosystem is it located?
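As a rough illustration, here is what a single entry in such a data map might look like in Python. The field names are illustrative, not a standard schema:

```python
from dataclasses import dataclass

# One record-of-processing-activities entry, answering the four
# questions above. Field names are illustrative, not a regulatory standard.
@dataclass
class ProcessingRecord:
    personal_data: str   # what personal data is collected
    purpose: str         # why it was collected
    users: list[str]     # who is using it
    location: str        # where in the ecosystem it lives

record = ProcessingRecord(
    personal_data="email address",
    purpose="account registration",
    users=["marketing", "support"],
    location="crm.customers table",
)
```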
This data map gives us a lineage of how personal data flows through various systems, and if you have this well-defined data pipeline, then your AI governance framework can build on it. When thinking about AI governance, an organization doesn't have to create an entirely new framework; it can be an extension of its existing data infrastructure and governance processes.
For example, when an organization communicates its privacy policy to customers, it has to disclose whether the data being collected will be used to train AI models.
This in turn opens up the issue of how to handle a data subject access request in which a customer asks us to delete their data. If we have already trained an AI model on that data, how do we implement the request? Retraining the model is expensive, and there are currently a number of open considerations and challenges around this. Data privacy and AI development go hand in hand.
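One way to make such a request tractable, as a minimal sketch assuming training-data provenance is tracked per model, is to record which customer records fed each training run so an erasure request can be scoped and honored at the next retraining cycle. All names below are hypothetical:

```python
# Hypothetical provenance ledger: which customer records fed each run.
training_runs = {
    "model-v1": {"cust-001", "cust-002", "cust-003"},
    "model-v2": {"cust-002", "cust-004"},
}

def models_affected(customer_id: str) -> list[str]:
    """List the models trained on this customer's data."""
    return [m for m, ids in training_runs.items() if customer_id in ids]

def next_training_set(all_ids: set[str], erased: set[str]) -> set[str]:
    """Exclude erased subjects when assembling the next training set."""
    return all_ids - erased

print(models_affected("cust-002"))  # ['model-v1', 'model-v2']
print(next_training_set({"cust-001", "cust-002"}, {"cust-002"}))
```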
Q: How can organizations deploy AI while keeping data privacy in mind?
Popuri: I often refer to the OECD AI principles when posed with this question. The diagram below is in an ideation phase and builds on those responsible and ethical AI principles. On the left side of the model below, you can see an existing framework for a data map or flow, which most organizations will already have.
A well-functioning privacy program will also assign risk scores to certain types of data. For example, a Social Security number would typically be classified as extremely sensitive, while an email address, though still personally identifiable information, would be considered less sensitive. Having a taxonomy of how your organization defines its risk categories is one of the foundations of a data governance program.
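As a minimal sketch of such a taxonomy (the categories and scores below are examples, not a regulatory standard):

```python
# Illustrative sensitivity tiers for common data types.
RISK_SCORES = {
    "social_security_number": 5,  # extremely sensitive
    "health_record": 5,
    "date_of_birth": 3,
    "email_address": 2,           # still PII, but lower risk
    "country": 1,
}

def dataset_risk(fields: list[str]) -> int:
    """Score a dataset by its most sensitive field."""
    return max(RISK_SCORES.get(f, 0) for f in fields)

print(dataset_risk(["email_address", "social_security_number"]))  # 5
```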
The new part of this, which can feel very novel and overwhelming right now, is AI model governance, depicted on the right side of the diagram. Any organization looking to deploy responsible AI practices has to consider factors like fairness and bias detection. These safety and security measures have to be implemented from the beginning and continuously built upon.
There is currently a lot of debate about how to leverage privacy-enhancing technologies in AI development, including ideas like training models on synthetic data, or anonymizing customer data so that an individual cannot be identified from a dataset. I feel confident that there is a path forward for respecting privacy while fostering AI innovation.
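To make the anonymization idea concrete, here is a minimal sketch of pseudonymizing a direct identifier with a salted hash before data reaches a training pipeline. One honest caveat: salted hashing is pseudonymization, not true anonymization, since linkage attacks may still be possible:

```python
import hashlib
import os

# Replace a raw identifier with a salted hash before training.
# This is pseudonymization, not full anonymization.
SALT = os.urandom(16)

def pseudonymize(value: str) -> str:
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:12]

row = {"email": "ada@example.com", "tenure_years": 4}
safe_row = {**row, "email": pseudonymize(row["email"])}
print(safe_row)  # email replaced by an opaque token
```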
Related: Humans Behind the AI: The AI Consultant 💻
Q: When discussing bias detection, what does “minimizing harm” look like in practice?
Popuri: From the perspective of an algorithm developer, the main goal should always be to minimize the potential harm caused by AI. There are existing ethical frameworks and recommendations from various jurisdictions that help inform this, such as those from the National Institute of Standards and Technology (NIST) in the U.S.
If my goal is to build my model in a way that reduces bias and ensures fairness, then I first have to think about how I define fairness. According to Princeton Professor Arvind Narayanan, there are 21 definitions of fairness, so this step alone can get complicated.
For our purposes, I'll use a simple definition of fairness known as anti-classification: if two individuals X and Y have the same unprotected attributes, then the decision taken by the algorithm should be the same irrespective of their protected attributes. Attributes such as age, race, gender, and ethnicity are protected by law and, in contexts like employment, must not be used to discriminate against an individual.
In the context of an employment decision, for example, these attributes should have no bearing. If I were building a model to help make hiring decisions and two candidates have the same unprotected attributes, like experience and skills, then the model should evaluate them identically.
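One way to turn anti-classification into a concrete test is a counterfactual check: flip the protected attribute and verify the decision does not change. The toy model and feature names below are stand-ins, not a real hiring system:

```python
# Counterfactual test for anti-classification: vary only the protected
# attribute and confirm the decision is unchanged.

class ToyHiringModel:
    """A stand-in classifier that ignores protected attributes by design."""
    def predict(self, candidate: dict) -> str:
        score = candidate["experience_years"] * 0.1 + candidate["skills_score"]
        return "advance" if score >= 1.0 else "reject"

def anti_classification_check(model, candidate: dict,
                              protected: str, values: list) -> bool:
    """True if the decision agrees across every protected-attribute value."""
    decisions = {model.predict({**candidate, protected: v}) for v in values}
    return len(decisions) == 1

model = ToyHiringModel()
candidate = {"experience_years": 6, "skills_score": 0.8, "gender": "F"}
print(anti_classification_check(model, candidate, "gender", ["F", "M"]))  # True
```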
Another way to minimize harm is to train the model on a diverse, representative dataset.
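A simple way to check representativeness is to compare group shares in the training data against a reference population and flag groups that fall badly short; the threshold below is illustrative:

```python
from collections import Counter

def underrepresented(labels: list[str], reference: dict[str, float],
                     tolerance: float = 0.5) -> list[str]:
    """Flag groups whose share is below tolerance * expected share."""
    counts = Counter(labels)
    total = len(labels)
    return [g for g, expected in reference.items()
            if counts[g] / total < expected * tolerance]

train_groups = ["A"] * 90 + ["B"] * 10
print(underrepresented(train_groups, {"A": 0.5, "B": 0.5}))  # ['B']
```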
Working with these mental frameworks when developing AI helps reduce bias in the model, so that's what I think of when people say, "I use responsible AI practices."
Related: Humans Behind the AI: The Chief Science Officer 🧪
Q: What is your biggest hope and your biggest fear for AI?
Popuri: I'll start with the fear: that as AI continues to rapidly advance, we'll gain the ability to customize our own personalized AI assistants and chatbots, to the point where it's almost like building a virtual friend. We as humans need to be in connection with each other, but we are heading toward an existential crisis of sorts, where connecting with a virtual AI assistant could feel easier than connecting with another person. This is an observation I've made based on what happened when the world was taken over by smartphones and social media. We began inventing our identities on these platforms and losing sight of what's real and what isn't.
It’s a little doom-and-gloom as I’m afraid that AI will cause us to lose that element of connection. We need other humans to feel our joy and sorrow with us, and we can’t and shouldn’t expect this technology to replace a good friend.
On the positive side of things, I am optimistic about the tremendous improvements in productivity and automating mundane tasks that AI will grant us. From summarizing really long articles to generating images and videos, AI will give us so much time back. Hopefully people will use that newfound free time to connect with real people.
Meet more Humans Behind the AI: