Humans Behind the AI: The Strategist


Meet Keatra Nesbitt

Principal Solutions Engineer at Valkyrie

Responsible AI strategist, former teacher, and human.

Q: What does responsible AI mean to you?

Keatra Nesbitt: A lot happened in 2020 – the more I saw harmful technology put out into society, the more I wanted to make sure we weren’t one of the companies contributing to the problem. I remember thinking, “How can we evaluate our models to ensure that the outputs are not biased towards certain communities over others?” Others on the team had similar concerns, so we all got together to start creating Responsible AI (RAI) practices at Valkyrie.

The task force we created – the Responsible AI Task Force – started by educating ourselves on RAI practices and researching tools that could evaluate bias in data and model outputs and surface other privacy or security red flags.
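To make that concrete, here is a minimal sketch of one common class of bias check: comparing a model’s positive-prediction rate across demographic groups (demographic parity). This is illustrative only – the data, threshold, and function names are hypothetical, not Valkyrie’s actual tooling.

```python
# Illustrative sketch: a demographic parity check on binary model outputs.
# All data and names here are hypothetical examples.
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return the share of positive predictions for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any two groups."""
    rates = positive_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical binary model outputs and a sensitive attribute.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(positive_rate_by_group(preds, groups))          # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(preds, groups))          # 0.5 - a large gap is a red flag
```

A gap of 0.5 here means group A receives positive predictions three times as often as group B – exactly the kind of disparity these evaluation tools are built to flag for human review.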

Everyone has a different definition of ethics, but at Valkyrie we hold everything up to our values of grit, love, honor, hope, and curiosity. Whenever we’re considering clients or designing a data solution, we evaluate against those five values to ensure that our processes and solutions are causing the least harm; that’s what RAI means to me. 

Q: How do you translate this across the projects Valkyrie works on?

Nesbitt: In 2021, Valkyrie built a model to forecast ambulance transport demand. In developing this model, I created a pilot program where we would go out to the different dispatch units and speak to employees to witness their day-to-day. We wanted to really understand their operations so that the model we implemented actually helped their workflow. Before teaching them about AI and what a Machine Learning (ML) model could do, I spent time observing the dispatch call center, touring the facilities, and learning how managers made decisions.

That learning and understanding process is the first part of responsibly implementing AI, and that’s why I was originally interested in strategy. I knew that sitting behind a computer and coding all day would not do it for me. I knew that I liked client engagement – talking to and teaching people. Being able to teach our clients about ML and explain these really technical concepts in a simple manner brings me the same joy I had when I was teaching math to high schoolers. It’s about taking something that seems really complex and breaking it down until you see someone have that lightbulb moment.

When I feel like I’m leaving my clients more aware of and educated about what AI is, it’s immensely rewarding. It’s important for clients to understand that AI is not some black box of magic that can’t be understood; rather, it’s a collaborative solution that requires user input.


Related: Humans Behind the AI: The AI Consultant 💻

Q: Did you always want to work in data science?

Nesbitt: I have always enjoyed math, but when I entered college, I thought the only thing you could do with a math degree was become a teacher, so I studied Math with a focus on Secondary Education. After a year and a half, I spoke with my advisor and decided to change tracks to Applied Mathematics. This was my first introduction to coding, data modeling, and statistics.

Unlike in my secondary education courses, I was often the only woman in my applied math and coding classes. But rather than discouraging me, that actually motivated me to keep going. Ironically enough, I ended up teaching ninth-grade algebra a few years after graduation, and while I found it really fulfilling, I was always drawn back to data science. I attended a data science boot camp to brush up on my coding skills, and that’s where I met Betsy Hilliard, Valkyrie’s Chief Science Officer, and the rest is history.

Related: Humans Behind the AI: The Chief Science Officer 🧪

Q: How did teaching ninth-grade math impact your role at Valkyrie?

Nesbitt: My favorite part of my job is connecting business goals to AI solutions, which oftentimes feels a lot like learning and teaching. Building trust with our clients is essential for increasing their comfort level with AI, and our approach always begins with acknowledging that our clients are the experts in their respective industries. They know more about cruise line operations, emergency transport services, and music subscriptions, to name a few, than we do. 

When we first meet with a client we always position ourselves as listeners and learners of their industry, their struggles, and their concerns. They are more open when we can go back and position a solution as a direct response to one of the challenges they shared with us. This allows people to feel heard and realize that we are not just forcing an AI solution on them, but that we took the time to truly understand their problems and their industry before coming up with a recommendation. We continuously ask ourselves, “What are the best data-based solutions for our clients?”

Related: Humans Behind the AI: The Educator 📚

Q: What is your biggest hope and your biggest fear for AI?

Nesbitt: My biggest hope is that AI starts to bridge and reduce gaps: political gaps, socioeconomic gaps, racial gaps. There is such a disconnect in our society – it sometimes feels like we are purposefully pitting people against each other. I hope that the technology that is created encourages and fosters communication, collaboration, and understanding. Skills like empathy, for example, are hard to teach. How can we use AI to improve some of these soft skills that make us better humans?

My biggest fear is just the opposite: that AI will cause mass harm before regulation comes in. That it will continue to separate us and widen gaps to the point where a select few have access to knowledge through AI, and those who do not are negatively impacted. Certain regulations should be set before an AI solution can reach the masses in a really short amount of time, just as there are guardrails on releasing a new drug or medication. Taking the time to check that an AI technology is producing no harm is the least we can do.


Meet more Humans Behind the AI:

- The AI Consultant

- The Educator

- The Gen Z Thought Leader

- The Physicist

- The Accidental Lawyer

- The Chief Science Officer

- The Ideator

About Valkyrie

Valkyrie is an applied science firm that builds industry-defining AI and ML models through our services, product, impact, and investment arms. We specialize in taking the latest advancements in AI research and applying them to solve our clients’ challenges through model development or product deployment. Our work spans industries, with clients including SiriusXM, Activision, Chubb Insurance, and the Department of Defense. For more information, visit www.valkyrie.ai.

About Valkyrie Intelligence

Founded in Austin, Texas in 2017, Valkyrie Intelligence is a services company that applies science to data to solve challenges, drive impact, and create value. Our team of expertly trained scientists and strategists delivers custom machine learning capabilities to solve complex challenges and drive organizational impact. By finding the hidden insights within data, our team creates solutions that allow our clients to act on empirically backed intelligence. When we’re not obsessively building solutions for our clients, we’re advancing our field of science through research, pro bono work, and IP development.

Juliette Richart Nova