Anna Spitznagel from TrailML: Companies are asking for AI regulation
"A lot of companies were asking for regulation - we need a framework to know when we're good to go."
In this episode, Chris, Rod, and Anna Spitznagel from TrailML discuss the EU AI Act, its implications for companies, and the importance of AI governance. Anna provides an overview of the Act, explaining its risk classes and roles, and the challenges of compliance and enforcement. The conversation also touches on the balance between regulation and innovation, the categorization of AI use cases, and the emerging need for AI literacy within organizations. Additionally, they explore the implications of AI-generated content on intellectual property and the new job roles arising from the Act.
In this conversation, Anna Spitznagel discusses the importance of automating AI governance and compliance processes. She explains how her company, Trail, provides a co-pilot for AI governance, helping organizations navigate the complexities of AI regulations. The discussion covers practical examples of implementing AI solutions, customer onboarding, pricing models, and the significance of building trust with clients. Anna also shares insights into the future of AI governance, emphasizing the need for clarity in regulations and fostering trust in AI technologies.
Takeaways
The EU AI Act is the first large-scale regulation of AI globally.
AI governance is crucial to align AI technology with European values.
Companies must categorize their AI use cases into risk classes.
Like what you hear? Remember to smash that subscribe button for more insights every week!
Find below👇 our:
YouTube Episode
Spotify Podcast
Episode Transcript
Introduction and Welcome
Chris: Welcome to another episode of the Chris Rod Max Show! I'm really excited to have Rod and our special guest Anna from TrailML joining us today. Even though it's almost the end of the year, we're fortunate to have Anna in our virtual studio to discuss the EU AI Act, governance, and data security.
Rod: Hello everyone!
Chris: How are you both doing today? Rod?
Rod: I'm really happy we can close out the year with Anna, very glad to be here.
Chris: How about you, Anna?
Anna: I'm delighted to be here! This is actually my second podcast, so I'm becoming quite the pro. Looking forward to discussing responsible AI and AI governance with you both.
Understanding the EU AI Act
Chris: Before we dive into what you all do, it would be really helpful for our audience to understand exactly what the EU AI Act is. What was the idea behind it? Where do we stand now? And how are companies and stakeholders adopting it? Anna, could you help us get started?
Anna: Of course! Let me give you a quick overview of the AI Act and its most important aspects. The European AI Act is the first comprehensive AI regulation globally. While there have been regulatory attempts in different parts of the world, the AI Act is the first major regulation that's already been put into force.
The EU chose to regulate use cases rather than the technology itself, which is why they introduced risk classes and roles. These are the most important concepts of the AI Act because they determine what companies need to do when using, developing, or selling AI products.
There are four risk classes:
- Prohibited AI
- High-risk AI
- Limited-risk AI
- No risk
And for roles, the most common ones I see in companies are:
- Provider of an AI system
- Deployer of an AI system
Timeline and Compliance
Anna: The AI Act was put into force in August this year, and the first deadline is coming up in February 2025. This initial deadline has two key components:
- Prohibited AI systems must be removed from the market
- Organizations must ensure AI literacy across their workforce
Regarding enforcement, the fines are substantial - up to €35 million or 7% of global revenue, whichever is higher. This was intentionally set high by the EU.
Enforcement and Implementation
Chris: That's quite comprehensive. If I can recap: we have four risk levels, two roles (provider or deployer), and based on these combinations, there are different requirements to follow. This year marks the first year the EU AI Act is in force, with specific deadlines and significant fines for non-compliance. Who actually enforces these requirements? Is there someone who checks your systems?
Anna: That's still being figured out. Member states have until August 2025 to determine who will be responsible for enforcement at the national level. In Germany, it looks like it will be the Bundesnetzagentur, but this can vary by country.
For generative AI and general-purpose AI, the EU has established a dedicated AI office at the European level. They're actively hiring, especially technical experts, as reviewing these models requires specific expertise.
Risk Categories and Examples
Chris: Could you explain the four risk categories and provide some examples? Also, tell us more about AI literacy - what does that really mean?
Anna: Let me break down the risk categories:
Prohibited AI: These are use cases the EU won't allow on the market. For example, social scoring by governments or emotion recognition in workplaces.
High-Risk AI: This typically involves human data or decisions that impact people's lives. A good example is credit scoring - AI systems that determine whether someone gets a loan. These systems are allowed but must meet strict requirements around risk management, quality management, and technical transparency.
Limited Risk AI: This mostly applies to generative AI solutions that interact with people. The focus is on transparency - ensuring people know when they're interacting with AI or when content is AI-generated.
No Risk: Simple applications like email spam filters, where the potential harm is minimal.
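To make the categorization Anna describes concrete, here is a toy sketch (not TrailML's product and not legal guidance) that maps the example use cases from this episode to the four risk classes; the names and lookup rules are entirely hypothetical:

```python
from enum import Enum

class RiskClass(Enum):
    """The four risk classes of the EU AI Act, as described in the episode."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    NONE = "none"

# Hypothetical lookup table built from the examples Anna gives above.
EXAMPLE_USE_CASES = {
    "social scoring by governments": RiskClass.PROHIBITED,
    "emotion recognition in workplaces": RiskClass.PROHIBITED,
    "credit scoring": RiskClass.HIGH,
    "customer-facing chatbot": RiskClass.LIMITED,
    "email spam filter": RiskClass.NONE,
}

def classify(use_case: str) -> RiskClass:
    """Return the risk class for a known example use case."""
    return EXAMPLE_USE_CASES[use_case]

print(classify("credit scoring").value)  # high
```

In practice the classification depends on the full context of the use case (who deploys it, whose data it touches, what decisions it drives), which is exactly why a simple lookup like this is only an illustration.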
AI Literacy Requirements
Anna: Regarding AI literacy, it's about ensuring every employee who works with AI understands its basics. The AI office recently clarified that everyone needs a base understanding that AI isn't a magic box but uses mathematical methods. They need to understand AI's limitations and be aware of the AI Act's rules.
Companies are approaching this through various methods, often starting with webinars for basic training. The level of required knowledge varies by role - data scientists need deeper understanding than business users.
Industry Impact and New Roles
Chris: Sometimes governments and companies might not align because it's one thing to create policies and another to upskill people. Have you seen new types of jobs emerge because of the EU Act and the rise of generative AI?
Anna: Yes, we're seeing new roles emerge. Companies are creating positions like AI governance leads, which were previously luxury positions only at big companies. The question of who owns AI governance and compliance varies - sometimes it's in IT security, sometimes in compliance, and sometimes under the AI manager.
TrailML's Role
Chris: Tell us more about TrailML.
Anna: TrailML provides tools to help with AI regulation compliance. Many companies initially think they can manage with Excel, but AI's dynamic nature requires more sophisticated solutions. We built Trail as a co-pilot for AI governance, automating documentation and ensuring transparency.
We're currently a team of 10 people, and we've developed a co-pilot that guides organizations through compliance requirements. Our system integrates with code and databases to provide structured information, making governance processes more efficient.
Practical Application
Chris: Let's use an example. Say Rod and I have the Chris Rod Max Show podcast company and want to implement an AI chatbot. What happens when we come to you for compliance help?
Anna: You would start by logging into our web platform. If you're developing the chatbot yourself, you'd use our Python package to upload your code and logging information. The platform would guide you through categorizing your use case and determining your risk class and role.
In the case of a chatbot, it would likely fall under limited risk, which means fewer requirements than high-risk applications. You'd assign roles and responsibilities - perhaps one compliance lead, one AI lead, and one developer. Each person would handle different requirements and approvals.
Customer Base and Pricing
Chris: Who are your customers? Are you targeting smaller or larger companies?
Anna: In theory, everyone needs AI governance, but we currently see two main customer types:
Large enterprises who have been thinking about this for a while and need smart solutions for their complex regulatory environment.
AI-native startups who need to prove compliance to sell to larger companies.
Our pricing is use-case based - it differs significantly between a single use case and handling 100 use cases. For smaller companies, think of it in the productivity tool price range; for enterprises, it's more in line with typical compliance solution pricing.
Looking Ahead
Rod: As we approach 2025, what are your predictions for next year? Where do you see the industry going?
Anna: I think we're moving from experimentation to scaling AI solutions and ensuring business value. On the regulatory side, we'll see more clarity through standards, guidelines, and best practices. For general AI, I expect continued innovation, fewer hallucinations, and better quality models.
Final Thoughts
Chris: Anna, for any listeners wanting to get in touch with you, how can they do that?
Anna: You can check out the TrailML website, book a demo, or reach out to me directly on LinkedIn. Happy to chat about all these topics!
Chris: Thank you for sharing your knowledge about the EU AI Act. I'm sure our listeners learned a lot today.
Rod: Don't forget to subscribe to our newsletter on ChrisRodMax.com!
Chris: Thank you, Rod.
Anna: Have a great day!