IBM Is Putting Responsible AI to Work: A Keynote Presentation at ITEXPO 2024

By Alex Passett, Editor  |  February 22, 2024

Readers, our editorial team did its best to be in multiple places at once this year so we could provide the best possible stories from ITEXPO 2024, which took place last week at the Broward County Convention Center in Fort Lauderdale, Florida.

One super interesting keynote session that my fellow editor Greg Tavarez and I were able to attend together was led by IBM; specifically, Kate Soule, IBM’s Program Director for Generative AI Research.

Greg’s coverage recapped much of how Soule compared GenAI to The Good, the Bad and the Ugly. “Sure, the good is great,” Soule admitted, “but we need to assess the bad and the ugly sides so we can see the full picture and best understand what to do next.”

That said, my coverage is a more bulleted rundown of Soule’s data and of how IBM is harnessing AI at deeper technical levels to make a real difference for individuals and enterprises alike.

First up: IBM’s trusted and transparent data curation process for GenAI development. Its stages include the following (a rough code sketch follows the list):

  • Dataset Acquisition – collection and extraction through key domain experts, IBM Legal, Responsible Business Alignment (IBM Procurement), and IBM’s AI Ethics Board
  • Dataset Processing – a model-agnostic split of data deduplication (e.g., document ID generation, sentence splitting) and data annotation (e.g., language detection; hate, abuse, and profanity detection; document quality monitoring; URL blocklisting)
  • Data Preprocessing – mainly data filtering and tokenization
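
To make those stages concrete, here is a minimal, illustrative Python sketch of the processing and preprocessing steps Soule named. This is not IBM’s actual pipeline: the blocklist, the profanity lexicon, and every function name here are hypothetical stand-ins, with a toy whitespace tokenizer in place of a real one.

    import hashlib
    import re

    URL_BLOCKLIST = {"example-bad-site.com"}   # hypothetical blocklist
    HAP_LEXICON = {"badword1", "badword2"}     # hypothetical hate/abuse/profanity terms

    def document_id(text: str) -> str:
        # Deduplication: a stable content hash serves as the document ID.
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    def split_sentences(text: str) -> list[str]:
        # Naive sentence splitting on terminal punctuation.
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

    def passes_annotation(doc: dict) -> bool:
        # Annotation-style filters: language check, HAP terms, URL blocklist.
        if doc.get("lang") != "en":                      # language-detection stub
            return False
        if set(doc["text"].lower().split()) & HAP_LEXICON:
            return False
        if any(host in doc.get("url", "") for host in URL_BLOCKLIST):
            return False
        return True

    def tokenize(text: str) -> list[str]:
        # Preprocessing: toy whitespace tokenizer standing in for a real one.
        return text.lower().split()

    def curate(raw_docs: list[dict]) -> list[dict]:
        # Run acquired documents through dedup, annotation, and preprocessing.
        seen, curated = set(), []
        for doc in raw_docs:
            doc_id = document_id(doc["text"])
            if doc_id in seen:                           # drop exact duplicates
                continue
            seen.add(doc_id)
            if not passes_annotation(doc):
                continue
            curated.append({"id": doc_id,
                            "sentences": split_sentences(doc["text"]),
                            "tokens": tokenize(doc["text"])})
        return curated

    docs = [
        {"text": "GenAI is powerful. It needs governance.", "lang": "en", "url": "https://example.com"},
        {"text": "GenAI is powerful. It needs governance.", "lang": "en", "url": "https://example.com"},
    ]
    print(len(curate(docs)))  # 1 – the duplicate document is dropped

A production pipeline would use trained language and toxicity classifiers and a subword tokenizer, but the control flow – deduplicate, annotate and filter, then tokenize – mirrors the stages above.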

“The bigger the models that rake in and utilize data,” Soule explained, “the bigger the risks and costs. And LLM costs are, indeed, growing at unsustainable rates worldwide; our computing appetites are outstripping supply, and there are more and more toxic individuals that are being trusted with enterprise AI that frankly shouldn’t be. So, we’re studying how to keep fairness and trustworthiness in the AI world alive. We need to eliminate the risks.”

Soule then dissected additional model alignment details (i.e., training techniques intended to improve model safety).

The potential AI/LLM risks that IBM assesses branch into a number of factors and uses that can result in negative outcomes. The main categories (with a small illustrative encoding after the list) are:

  • Human-Chatbot Interaction Harms – factors like AI overreliance and misuse, inappropriate psychological consults and/or the use of AI to diagnose emotional coping strategies, and requests for personal user information
  • Malicious Uses – encouraging disinformation campaigns, assistance in illegal activities, spam content, financial crimes, illegal trade, illegitimate surveillance and/or censorship, copyright infringement, defamatory content and abuse, and endorsements of cyberbullying and similar harassment
  • Misinformation Hazards – the dissemination of false or misleading data that causes material harm, misinformation bait that leads to leaks of sensitive government information, propaganda, and access to unreliable expert advice
  • Discrimination, Exclusion, Toxicity – hate speech, exposure to obscene content, body shaming, racial/ethnic discrimination, gender/sexual discrimination, the targeting of disabilities, and explicit graphic or violent content
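
One way to picture how such a taxonomy might be put to work is to encode the categories as data, so flagged model outputs can be routed to reviewers by category. This Python sketch is purely illustrative: the category keys mirror the keynote’s list, but every flag name is an assumption, not IBM terminology.

    RISK_TAXONOMY: dict[str, set[str]] = {
        "human_chatbot_interaction": {"overreliance", "psychological_consult", "pii_request"},
        "malicious_use": {"disinformation", "illegal_activity", "spam", "financial_crime",
                          "surveillance", "copyright", "harassment"},
        "misinformation": {"false_claims", "leak_bait", "propaganda", "unreliable_advice"},
        "discrimination_exclusion_toxicity": {"hate_speech", "obscenity", "body_shaming",
                                              "racial_discrimination", "gender_discrimination",
                                              "disability_targeting", "graphic_violence"},
    }

    def categorize(flags: set[str]) -> list[str]:
        # Return every taxonomy category that a set of raised flags touches.
        return [cat for cat, factors in RISK_TAXONOMY.items() if flags & factors]

    print(categorize({"propaganda", "pii_request"}))
    # ['human_chatbot_interaction', 'misinformation']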

IBM’s teams are evaluating all of this and more to, as stated earlier, make a real difference.

“IBM’s core principles now revolve around the responsible implementations of AI – intelligence that is open, properly governed, and designed to create enterprise value without the heavy risk of imperiling users,” Soule concluded. “IBM’s AI is transparent and designed to support a broader and safer ecosystem of innovation.”




Edited by Alex Passett