
Humans Behind the AI: The Chief Science Officer

Industry & Legal Education
4 Min Read
By: 
Juliette Richart Nova
Posted: 
February 28, 2024


Meet Betsy Hilliard

Chief Science Officer at Valkyrie

Scientist, mother, and human.

Q: What does Valkyrie do with AI?

Betsy Hilliard: At Valkyrie, we have a set of tools – machine learning, AI, and knowledge engineering – and we’re looking for problems that these tools can solve. Practically every company has a problem that could be solved by one of these technologies. We work with motorsports, ambulance providers, architecture firms, and even a major video game creator. These are all broad domains, but we always ask, “What can AI do for these areas?”

Related: Humans Behind the AI: Meet the AI Consultant 💻

Q: Is AI science or technology?

Hilliard: We’re using the term “AI” more and more, but I like to think that most of the time we’re solving problems with machine learning and knowledge engineering rather than “AI.” 

AI is the buzzword that people like talking about, but the way I think about AI is that it’s actually the science of this field. The joke when I was in grad school was, “It’s AI until we solve it, then it’s just technology.”

Until computers can reliably solve a type of problem across several domains, it’s science. For example, recommendation engines were in the category of AI before we became good at them. Now, recommendation engines are commoditized: Amazon deploys them, Netflix deploys them, and social media platforms deploy them. You interact with so many recommendation engines every single day to the point where it's seen as a solved problem technology-wise. 
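
To make "commoditized" concrete: the core of an item-based recommender now fits in a few lines and ships in off-the-shelf libraries. Here is a minimal sketch in Python – the ratings matrix and item names are invented for illustration, not anything Valkyrie or the platforms above actually use:

```python
# A minimal item-based collaborative-filtering sketch (illustrative only).
# The ratings matrix and item names are made up for this example.
import numpy as np

# Rows = users, columns = items; 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)
items = ["Item A", "Item B", "Item C", "Item D"]

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

# Item-item similarity matrix.
n = ratings.shape[1]
sim = np.array([[cosine_sim(ratings[:, i], ratings[:, j])
                 for j in range(n)] for i in range(n)])

def recommend(user, k=1):
    """Score each unrated item by a similarity-weighted average of the user's ratings."""
    rated = ratings[user] > 0
    scores = {
        items[j]: sim[j, rated] @ ratings[user, rated] / sim[j, rated].sum()
        for j in range(n) if not rated[j]
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(user=0))  # ['Item C'] for this toy matrix
```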

Generative AI, on the other hand, I do place in the category of AI, because these models are not perfect or solved. They cannot be called “technology” yet; when we use them, we are interacting with a science experiment. That's why there are disclaimers on ChatGPT. To me, that’s where the line is. Recommendation engines? They’re kind of solved; they shouldn’t be called AI. They’re machine learning. Natural language and generative AI chatbots? They’re science. AI.

Related: Humans Behind the AI: Meet the Educator 📚

Q: What, to you, is the difference between machine learning and AI?

Hilliard: Machine learning is a subcategory of AI – what machine learning does well is help you predict what's going to happen. It doesn’t prescribe what to do with that information; it just tells you, “We think that if you show these 12 items to your customers, they're gonna buy one of them with 95% probability.”  
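
Her example translates directly into arithmetic: if a model predicts a purchase probability for each of the 12 items, and we make the simplifying assumption that purchases are independent, the chance of at least one sale follows from the complement. The per-item probabilities below are invented to land near her 95% figure:

```python
# Turning per-item predictions into the kind of statement Hilliard describes.
# The per-item probabilities are invented, and independence between items is a
# simplifying assumption -- a real model would account for interactions.
per_item_p = [0.22] * 12  # predicted purchase probability for each of the 12 items

p_none = 1.0
for p in per_item_p:
    p_none *= (1 - p)  # probability the customer buys none of the items

p_at_least_one = 1 - p_none
print(f"P(at least one purchase) = {p_at_least_one:.0%}")  # ~95% here
```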

There’s a lot more in AI than just machine learning. The next step is making decisions about what to do with the information you get from machine learning. When people ask me to predict the future of AI, there are two main aspects that I think are going to become much more important:

One is making models that can interpret and detect causality, not just correlation. With causality, we can pinpoint which features of a product caused someone to click on an ad, for example – a very hard problem, but it’s what everyone wants to know (sketched in the first example below).

The other problem is optimization. How do you combine machine learning and optimization to make decisions, especially if you've got to make a lot of decisions over a long period?
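
Both directions can be illustrated with toy sketches. First, causality: the simulation below (all numbers invented) shows why a naive difference in click rates recovers the true causal effect of showing an ad feature only when exposure is randomized – otherwise a hidden confounder inflates it:

```python
# Toy illustration of why causality is harder than correlation (numbers invented).
# The true effect of showing the feature is +10 points of click rate.
import random

random.seed(0)

def simulate(randomized, n=200_000):
    treated_clicks = treated_n = control_clicks = control_n = 0
    for _ in range(n):
        engaged = random.random() < 0.5                          # hidden confounder
        if randomized:
            shown = random.random() < 0.5                        # independent of engagement
        else:
            shown = random.random() < (0.8 if engaged else 0.2)  # confounded exposure
        p_click = (0.30 if engaged else 0.05) + (0.10 if shown else 0.0)
        clicked = random.random() < p_click
        if shown:
            treated_clicks += clicked
            treated_n += 1
        else:
            control_clicks += clicked
            control_n += 1
    return treated_clicks / treated_n - control_clicks / control_n

print(f"randomized estimate:    {simulate(True):+.3f}")   # ~ +0.100 (true causal effect)
print(f"observational estimate: {simulate(False):+.3f}")  # ~ +0.250 (inflated by confounding)
```

Second, optimization over many decisions: an epsilon-greedy bandit is one of the simplest ways to couple a running prediction (estimated click rates) with a repeated decision about which option to choose. Again, the true rates here are invented:

```python
# A minimal epsilon-greedy bandit: prediction feeding a repeated decision.
import random

random.seed(1)
true_rates = [0.05, 0.12, 0.09]   # unknown to the algorithm
counts = [0, 0, 0]
estimates = [0.0, 0.0, 0.0]       # the "machine learning" piece: running click-rate estimates
epsilon = 0.1                     # fraction of decisions spent exploring

for _ in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(3)                           # explore a random option
    else:
        arm = max(range(3), key=lambda a: estimates[a])     # exploit the best estimate
    clicked = random.random() < true_rates[arm]
    counts[arm] += 1
    estimates[arm] += (clicked - estimates[arm]) / counts[arm]  # incremental mean update

print(counts)                              # most pulls concentrate on arm 1, the best option
print([round(e, 3) for e in estimates])    # estimates approach the true rates
```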

 

Q: Did you always want to work in science?

Hilliard: I have always been interested in science. I’m from a little town in Tennessee that was part of the Manhattan Project, which really shaped my upbringing. I grew up around scientists and engineers who came from all over the world to work at the national laboratories, one of whom was my dad. He worked in operations research, which is basically the same thing I do now. I was late for school a few times because we got carried away talking about science and the projects he was working on. I went to Brandeis University, discovered computer science and programming, and fell in love with it. 

Related: Humans Behind the AI: Meet The Physicist 🧪

Q: What does responsible use of AI mean to you?

Hilliard: There's going to be more regulation around AI, which in general is a good thing. I’m concerned that the people who are making these policy decisions do not yet have the knowledge to regulate AI well. Hopefully, they will go to experts and get that information. There does need to be more regulation than there currently is, but exactly what that looks like is an open question. A lot of fields have standards – in medicine there's the Hippocratic Oath. In engineering, bridges must be built to hold X times a certain weight. We need to be building out those standards for software engineering and artificial intelligence as well. The field is new and unregulated right now, but it affects everyone so much. 

That’s where responsible AI comes into play: making sure that when you are building models, you’re aware of the biases in your data. There is bias in all data, with different levels of harm that can happen if you don’t note that bias or try to mitigate it. You have to be aware of the decisions that are being made with machine learning or AI systems – not only the people they might harm immediately, but also those at the next level. The people who are using AI are not always the ones who are going to be harmed by bias in the system. You have to take at least one step further back and see who’s affected by the model you’re building.
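
One concrete first step of the kind she describes is simply measuring how a model's decisions fall across groups. The sketch below computes a demographic-parity gap on invented data – demographic parity is only one of many fairness notions, and a large gap is a prompt for investigation rather than a verdict:

```python
# A first-pass bias check: compare a model's positive-outcome rate across groups.
# Predictions and group labels are invented for illustration.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # model decisions (1 = approve)
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def positive_rate(group):
    """Fraction of approvals among members of the given group."""
    picks = [p for p, g in zip(predictions, groups) if g == group]
    return sum(picks) / len(picks)

gap = positive_rate("A") - positive_rate("B")
print(f"approval-rate gap (A - B): {gap:+.2f}")  # +0.20 here; worth a closer look
```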

Q: What is your biggest hope and your biggest fear for AI?

Hilliard: I think about this question differently now that I’m a mom. I wonder, and worry, about the world my son will live in. 

My biggest hope for AI – in the way that I define AI: machine learning, optimization, causality – is that we can help people automate the tedious and inform the complex. That AI can get rid of those things that drive everybody crazy about their job and elevate people to do more interesting work. That it can give people the tools and insight to make complex decisions. I think AI is going to get better and better at that.

On the other hand, I think there's a real risk that people will use this technology to do great harm in the world. I worry about the ability of machine learning systems to isolate us to the point where we're fed information that keeps us unintentionally ignorant of what is going on outside of our current world mindset. I worry that my son will live in an even more polarized society. 

If the systems are only targeted to give you what you want right now, then you're missing so much. When you're only shown TikTok videos and news stories like the ones you've seen before, you’re not going to grow – you're going to be isolated from those around you who have different experiences.

Related: Humans Behind the AI: Meet the Gen Z Thought Leader 💡

For further reading on the future of legal technology and how AI is shaping the landscape, click here.

Meet more of the Humans Behind the AI:

- The AI Consultant

- The Educator

- The Gen Z Thought Leader

- The Physicist

- The Accidental Lawyer
