    Women in AI: Heidy Khlaaf, safety engineering director at Trail of Bits

    By Staff Writer · March 11, 2024 · 6 min read

    To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

    Heidy Khlaaf is an engineering director at the cybersecurity firm Trail of Bits. She specializes in evaluating software and AI implementations within “safety critical” systems, like nuclear power plants and autonomous vehicles.

    Khlaaf received her Ph.D. in computer science from University College London and her BS in computer science and philosophy from Florida State University. She’s led safety and security audits, provided consultations and reviews of assurance cases, and contributed to the creation of standards and guidelines for safety- and security-related applications and their development.

    Q&A

    Briefly, how did you get your start in AI? What attracted you to the field?

    I was drawn to robotics at a very young age and started programming at 15, fascinated by the prospect of using robotics and AI (as they are inextricably linked) to automate workloads where they are most needed. In fields like manufacturing, I saw robotics being used to help the elderly and to automate dangerous manual labour in our society. I did, however, receive my Ph.D. in a different sub-field of computer science, because I believe that a strong theoretical foundation in computer science allows you to make educated and scientific decisions about where AI may or may not be suitable, and where the pitfalls may be.

    What work are you most proud of (in the AI field)?

    Using my expertise and background in safety engineering and safety-critical systems to provide context and criticism, where needed, on the new field of AI “safety.” Although the field of AI safety has attempted to adapt and cite well-established safety and security techniques, much of that terminology has been misconstrued in its use and meaning. The lack of consistent or intentional definitions compromises the integrity of the safety techniques the AI community is currently using. I’m particularly proud of “Toward Comprehensive Risk Assessments and Assurance of AI-Based Systems” and “A Hazard Analysis Framework for Code Synthesis Large Language Models,” where I deconstruct false narratives about safety and AI evaluations and provide concrete steps on bridging the safety gap within AI.

    How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

    We don’t often discuss how little the status quo has changed, but I believe that acknowledgment is important for me and other technical women in understanding our position within the industry and holding a realistic view of the changes required. Retention rates and the ratio of women holding leadership positions have remained largely the same since I joined the field over a decade ago. And as TechCrunch has aptly pointed out, despite tremendous breakthroughs and contributions by women within AI, we remain sidelined from conversations that we ourselves have defined. Recognizing this lack of progress helped me understand that building a strong personal community is a far more valuable source of support than relying on DEI initiatives, which unfortunately have not moved the needle, given that bias and skepticism towards technical women are still quite pervasive in tech.

    What advice would you give to women seeking to enter the AI field?

    Don’t appeal to authority, and find a line of work that you truly believe in, even if it contradicts popular narratives. Given the power AI labs hold politically and economically at the moment, there is an instinct to take anything AI “thought leaders” state as fact, when many AI claims are marketing speak that overstates the abilities of AI to benefit a bottom line. Yet I see significant hesitancy, especially among junior women in the field, to vocalise skepticism against unsubstantiated claims made by their male peers. Imposter syndrome has a strong hold on women within tech and leads many to doubt their own scientific integrity. But it is more important than ever to challenge claims that exaggerate the capabilities of AI, especially those that are not falsifiable under the scientific method.

    What are some of the most pressing issues facing AI as it evolves?

    Regardless of the advancements we’ll observe in AI, it will never be the singular solution, technologically or socially, to our issues. Currently there is a trend to shoehorn AI into every possible system, regardless of its effectiveness (or lack thereof) across numerous domains. AI should augment human capabilities rather than replace them, yet we are witnessing a complete disregard of AI’s pitfalls and failure modes, which are leading to real, tangible harm. Just recently, the AI gunshot-detection system ShotSpotter led to an officer firing at a child.

    What are some issues AI users should be aware of?

    How truly unreliable AI is. AI algorithms are notoriously flawed, with high error rates observed across applications that require precision, accuracy and safety-criticality. The way AI systems are trained embeds human bias and discrimination within their outputs, which become “de facto” and automated. This is because AI systems, by their nature, provide outcomes based on statistical and probabilistic inferences and correlations from historical data, not on any type of reasoning, factual evidence or “causation.”
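    To make that point concrete, here is a minimal sketch (the data, the loan-approval scenario, and the deliberately naive model are all hypothetical, not from the interview) of how a purely statistical predictor, trained only on historical correlations, automates whatever bias its training data contains:

```python
# Hypothetical illustration: a "model" that predicts by historical
# frequency alone will reproduce the bias baked into that history.
from collections import Counter

# Hypothetical historical decisions: (group, approved) pairs.
# Group "B" was approved far less often, for reasons unrelated to merit.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 20 + [("B", False)] * 80)

def fit(records):
    """'Train' by tallying outcome frequencies per group -- pure correlation."""
    tallies = {}
    for group, approved in records:
        tallies.setdefault(group, Counter())[approved] += 1
    return tallies

def predict(tallies, group):
    """Predict the historically most frequent outcome for the group."""
    return tallies[group].most_common(1)[0][0]

model = fit(history)
print(predict(model, "A"))  # True  -- the historical majority for group A
print(predict(model, "B"))  # False -- the historical bias, now automated
```

    The model never reasons about any individual case; it only replays historical frequencies, which is the failure mode described above.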

    What is the best way to responsibly build AI?

    Ensure that AI is developed in a way that protects people’s rights and safety by constructing verifiable claims and holding AI developers accountable to them. These claims should also be scoped to a regulatory, safety, ethical or technical application, and they must be falsifiable; otherwise, there is a significant lack of scientific integrity with which to appropriately evaluate these systems. Independent regulators should also assess AI systems against these claims, as is currently required for many products and systems in other industries — for example, those evaluated by the FDA. AI systems should not be exempt from the standard auditing processes that are well established to ensure public and consumer protection.

    How can investors better push for responsible AI?

    Investors should engage with and fund organisations that are seeking to establish and advance auditing practices for AI. Most funding is currently invested in AI labs themselves, with the belief that their safety teams are sufficient for the advancement of AI evaluations. However, independent auditors and regulators are key to public trust. Independence allows the public to trust in the accuracy of assessments and the integrity of regulatory outcomes.

    © 2026 Small Business Minder