AI and Risk Mitigation
Best practices for ethical AI in healthtech, fintech, edtech and public services
The rapid development of AI technologies is making waves across industries in India and has brought about a paradigm shift. Beyond the hype, it is essential to think through the many potential challenges these technologies bring and how to navigate them. To build a safe space for discussing how to mitigate these challenges, Anthill Inside is organizing a series of sessions on AI and Risk Mitigation.
Under the AI and Risk Mitigation series, meet-ups, talks, and discussions on healthtech, fintech, agritech, edtech, and public services are organized as a mix of online and hybrid sessions. Takeaways from these sessions will be used to develop a knowledge repository in the form of practical guidelines and a self-regulated charter for Ethical AI.
We invite you to share your work with us and participate in these sessions; participation can take the form of product demos, talks, panel discussions, and workshops. This is a call for proposals: submit an abstract of your work in this area. Submissions will be reviewed by the editors on a rolling basis.
This project covers five tracks, listed below along with suggested topics of discussion:
Meet-ups, talks, and roundtables will be hosted regularly for each track.
Anthill Inside is a community for discussing topics in AI and Deep Learning: tools and technologies, methodologies and strategies for incorporating AI and Deep Learning into applications and businesses, and AI engineering. Anthill Inside also places a strong emphasis on exploring and addressing ethical concerns, privacy, and bias, both in practice and within AI products.
Follow us on Twitter at @anthillin. For queries, write to editorial@hasgeek.com or call +91-7676332020.
Hosted by
Saba
Debanjum and I, Saba, co-founded Khoj, an open-source AI assistant, and went through Y Combinator last year. Khoj was founded under a year ago with the premise that personal, consumer AI should be safe, accessible, and aligned with the user's interests. You can use Khoj from the web, desktop, Obsidian, Emacs, or even WhatsApp: https://khoj.dev.
I previously scaled workplace AI assistance at Microsoft, built ETL pipelines for algorithmic trading, and developed ML models for fast, early detection of rare genetic diseases at a startup.
The world is changing rapidly. Within the next few years, personal AI services will become household names, and apps will replace much of today's human engagement. As individuals grow dependent on AI, we face significant alignment risks.
I’m going to focus on issues related to applications and platforms, rather than model development.
There are several existential risks associated with deploying misaligned AI in the world.