Women in AI: Brandie Nonnecke of UC Berkeley says investors should insist on responsible AI practices

To give AI-focused women academics and others their well-deserved (and overdue) time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who've contributed to the AI revolution. We'll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Brandie Nonnecke is the founding director of the CITRIS Policy Lab, headquartered at UC Berkeley, which supports interdisciplinary research to address questions around the role of regulation in promoting innovation. Nonnecke also co-directs the Berkeley Center for Law and Technology, where she leads projects on AI, platforms and society, and the UC Berkeley AI Policy Hub, an initiative to train researchers to develop effective AI governance and policy frameworks.

In her spare time, Nonnecke hosts a video and podcast series, TecHype, that analyzes emerging tech policies, regulations and laws, providing insights into the benefits and risks and identifying ways to harness tech for good.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I've been working in responsible AI governance for nearly a decade. My training in technology, public policy and their intersection with societal impacts drew me into the field. AI is already pervasive and profoundly impactful in our lives, for better and for worse. It's important to me to meaningfully contribute to society's ability to harness this technology for good rather than stand on the sidelines.

What work are you most proud of (in the AI field)?

I'm really proud of two things we've accomplished. First, the University of California was the first university to establish responsible AI principles and a governance structure to better ensure responsible procurement and use of AI. We take our commitment to serve the public in a responsible manner seriously. I had the honor of co-chairing the UC Presidential Working Group on AI and its subsequent permanent AI Council. In these roles, I've been able to gain firsthand experience thinking through how best to operationalize our responsible AI principles in order to safeguard our faculty, staff, students and the broader communities we serve. Second, I think it's critical that the public understand emerging technologies and their real benefits and risks. We launched TecHype, a video and podcast series that demystifies emerging technologies and provides guidance on effective technical and policy interventions.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

Be curious, persistent and undeterred by imposter syndrome. I've found it essential to seek out mentors who support diversity and inclusion, and to offer the same support to others entering the field. Building inclusive communities in tech has been a powerful way to share experiences, advice and encouragement.

What advice would you give to women seeking to enter the AI field?

For women entering the AI field, my advice is threefold: Seek knowledge relentlessly, as AI is a rapidly evolving field. Embrace networking, as connections will open doors to opportunities and provide invaluable support. And advocate for yourself and others, as your voice is essential in shaping an inclusive, equitable future for AI. Remember, your unique perspectives and experiences enrich the field and drive innovation.

What are some of the most pressing issues facing AI as it evolves?

I believe one of the most pressing issues facing AI as it evolves is not getting hung up on the latest hype cycles. We're seeing this now with generative AI. Sure, generative AI presents significant advancements and will have tremendous impact, good and bad. But other forms of machine learning are in use today that are surreptitiously making decisions directly affecting everyone's ability to exercise their rights. Rather than focusing on the latest marvels of machine learning, it's more important that we focus on how and where machine learning is being applied, regardless of its technological prowess.

What are some issues AI users should be aware of?

AI users should be aware of issues related to data privacy and security, the potential for bias in AI decision-making, and the importance of transparency in how AI systems operate and make decisions. Understanding these issues can empower users to demand more accountable and equitable AI systems.

What is the best way to responsibly build AI?

Responsibly building AI involves integrating ethical considerations at every stage of development and deployment. This includes diverse stakeholder engagement, transparent methodologies, bias management strategies and ongoing impact assessments. Prioritizing the public good and ensuring AI technologies are developed with human rights, fairness and inclusivity at their core is fundamental.

How can investors better push for responsible AI?

This is such an important question! For a long time we never expressly discussed the role of investors, and I can't express enough how impactful they are. I believe the trope that "regulation stifles innovation" is overused and often untrue. Instead, I firmly believe smaller firms can experience a late-mover advantage, learning from the larger AI companies that have been developing responsible AI practices and from the guidance emerging from academia, civil society and government. Investors have the power to shape the industry's direction by making responsible AI practices a critical factor in their investment decisions. This includes supporting initiatives that focus on addressing social challenges through AI, promoting diversity and inclusion within the AI workforce, and advocating for strong governance and technical strategies that help ensure AI technologies benefit society as a whole.
