AI and Risk Mitigation

Best practices for ethical AI in healthtech, fintech, edtech and public services

The rapid development of AI technologies is making waves across industries in India and has brought about a paradigm shift. Confronting the hype, it is essential to think through the myriad of potential challenges that come with these technologies and how to navigate them. In an effort to build a safe space to discuss mitigating these challenges, Anthill Inside is organizing a series of sessions on AI and Risk Mitigation.

Under the AI and Risk Mitigation series, meet-ups, talks, and discussions on healthtech, fintech, agritech, edtech, and public services are organised. These are a mix of online and hybrid sessions. Takeaways from these sessions will be used to develop a knowledge repository in the form of practical guidelines and a self-regulated charter for Ethical AI.

We invite you to share your work with us and participate in these sessions, which can be in the form of product demos, talks, panel discussions, and workshops. This is a call for proposals for you to submit an abstract of your work in this area. The submissions will be reviewed by the editors on a rolling basis.

Previous sessions

Tracks and topics

This project covers five tracks which are listed below, along with suggested topics of discussion:

  1. Healthtech:
  • Generative AI in healthcare, especially Chatbots (how are they being used, who, for what, etc.)
  • How to evaluate an AI product for healthcare?
  1. Edtech:
  • Privacy and security challenges in AI-enhanced learning environments
  1. Public services:
  • Exploring the role of transparency and explainability in AI systems deployed in public services
  • Implementing cybersecurity measures to protect AI systems in public services
  1. Fintech:
  • Privacy best practices for handling user data
  • Fraud detection techniques and explainiability in such models
  1. Agritech:
  • Solving for Indic LLMs
  • Agri-fintech - Weather Index-based microinsurance, crop insurance, affordable loans and credit for farmers, etc.
  • People challenges - promoting a symmetric relationship between farmers and companies, skill development and encouraging farmers to use AI tools safely, etc.

Who should participate

  • Business and tech leaders from startups
  • Product Managers and Data Scientists
  • Lawyers, doctors, agronomists, teachers and curriculum developers

Plan

There will be meet-ups, talks, and roundtables hosted for the tracks regularly. These will be a mix of online and hybrid sessions. Takeaways from these sessions will be used to develop a knowledge repository in the form of practical guidelines and a self-regulated charter for Ethical AI.

About Anthill Inside

Anthill Inside is a community where topics in AI and Deep Learning such as tools and technologies, methodologies and strategies for incorporating AI and Deep Learning into various applications and businesses, and AI engineering are discussed. Furthermore, Anthill Inside places a strong emphasis on exploring and addressing ethical concerns, privacy, and the issue of bias both in practice and within AI products.

Contact Information

Follow us on Twitter at @anthillin. For queries, write to editorial@hasgeek.com or call +91-7676332020.

Hosted by

Anthill Inside is a forum for conversations about risk mitigation and governance in Artificial Intelligence and Deep Learning. AI developers, researchers, startup founders, ethicists, and AI enthusiasts are encouraged to: more

Saba

The imminent risks for non-aligned, opaque AIs

Submitted Mar 20, 2024

Background

Debanjum and I, Saba, co-founded Khoj, an open-source AI assistant, and did YCombinater last year. We were founded under a year ago with the premise that personal, consumer AI should be safe, accessible and aligned to the user’s interest. You can use Khoj from the web, desktop, Obsidian, Emacs or even WhatsApp --> https://khoj.dev.

I’ve previously scaled workplace AI assistance at Microsoft, built ETL pipelines for algorithmic trading and developed ML models for fast, early detection of rare genetic diseases at a startup.

The world is changing rapidly right now. We’ll have household names for personal AI services within the next few years. We’ll be replacing a lot of human engagement with apps. We encounter significant alignment risks in the world of dependency on AI for individuals.

I’m going to focus on issues related to applications and platforms, rather than model development.

Risks

There are several existential risks associated with deploying misaligned AI in the world

  • Issues
    • Issue #1: Owners of services may have different goal functions than the people using the services
      • We can take the example of Gemini. More details are in this blog post.
      • Depending on sector, you can run into different problems. Health, there’s risk of sensitive data being shared more broadly. Education, there’s risk of our kids learning in ways that aren’t beneficial to them. In the companion space, you could have AIs taking advantage of people.
    • Issue #2: We have insufficient observability into how well goal functions are actually being met
      • A typical launch announcement from the major foundational LLM providers is comprised of notice of a new model, followed by benchmarks over how they’ve performed on a common suite of evals. Though it’s nice having evals available, the process is still overall opaque. To bolster safety of processes, we need more public red teaming.
      • Major companies have their own internal red teaming efforts, but this is insufficient in my opinion. Experiments, where safe, should be transparent, publicly available, and demonstrated in a way that’s reproducible. Definitions in the realm of evals can still be vague, and subtle differences in testing can have a massive impact on outcomes.
    • Issue #3: In the event that systems go down, people will lose access to their assistants
      • We can see authoritarian governments like Iran, Egypt, Pakistan, Indian-Occupied Kashmir undergo internet blackouts. Most of the world relies on centralized internet providers who administer to large regions. Given the government’s influence, these ISPs can be made to shut down services. This has a massive culling effect on social movements & change.
      • Letting people stay in ownership & control of their systems increases the likelihood that they’re unaffected by environmental changes.

Mitigations

  • Potential resolutions
    • Maintain open-source AI applications. Builders should put their code out in the open and make it observable.
    • Use local models that can be run on well-equipped personal laptops
    • Work actively on redteaming your software and publish the processes, results publicly
      • Make limitations publicly available, and make an active effort to resolve security, privacy issues as they’re revealed
    • Sparingly store data, especially personal information.
      • Be aggressive about the principle of not taking on user data you do not need. Bear in mind that leaks and privacy hacks put the end users in a very vulnerable position. You should want to minimize how much data you’re storing about your users.

Risks of Open-Sourcing

  • Hurts your moat
    • Generally it’s harder to have any “secret sauce” when all of your code is out in the open. This can be slightly mitigated by making developments in a private repository prior to feature launch and merging them into the master/main public repo.
  • Harder to monetize
    • I think this is mostly in theory. In practice, most people do not want to host their own complex systems. There’s a difference between making code freely available and offering a service for free.
  • Security risk
    • This has pros and cons. Being open source makes it very likely that potential issues will be proactively caught.

Comments

{{ gettext('Login to leave a comment') }}

{{ gettext('Post a comment…') }}
{{ gettext('New comment') }}
{{ formTitle }}

{{ errorMsg }}

{{ gettext('No comments posted yet') }}

Hosted by

Anthill Inside is a forum for conversations about risk mitigation and governance in Artificial Intelligence and Deep Learning. AI developers, researchers, startup founders, ethicists, and AI enthusiasts are encouraged to: more