The 2nd International Workshop on AI Governance (AIGOV)
Held in conjunction with AAAI 2025
A workshop that delves into the critical aspects of AI governance, with a specific focus on the contribution of Large Language Models (LLMs) to shaping ethical and responsible AI practices.
[Call for Reviewers]
[Call for Papers]
[Last Event: AIGOV @ IJCAI 2024]
Introduction
We are excited to announce the 2nd International Workshop on “AI Governance: Alignment, Morality and Law” (AIGOV) at AAAI 2025, a critical event addressing the principles, frameworks, and best practices necessary to navigate the ethical, legal, and societal dimensions of AI.
The rapid advancements in Artificial Intelligence (AI) bring unprecedented opportunities and challenges. As organizations and governments increasingly integrate AI technologies into various aspects of society, the need for effective AI governance becomes paramount. This workshop aims to provide a comprehensive understanding of AI governance principles, frameworks, and best practices to empower participants in navigating the ethical, legal, and societal dimensions of AI.
In addition to insightful technical discussions, we also plan to hold introductory talks to educate a broader community about the importance and urgency of AI governance. These talks will serve as a foundational introduction to the subject matter and its implications for society at large. Topics of interest include, but are not limited to:
- Understanding AI Governance: Define the core principles of AI governance. Explore existing regulations and ethical frameworks.
- Role of LLMs in AI Governance: Highlight the capabilities of LLMs in analyzing and generating human-readable content. Discuss how LLMs can contribute to drafting policies and guidelines.
- Addressing Bias and Fairness: Examine the challenges of bias in AI systems. Showcase how LLMs can assist in identifying and mitigating bias.
- Transparency and Accountability: Explore the importance of transparency in AI decision-making. Discuss how LLMs can aid in creating understandable and accountable AI systems.
- Policy Interpretation and Compliance: Illustrate how LLMs can assist policymakers in understanding complex technical concepts. Discuss the role of LLMs in monitoring and auditing AI systems for compliance with governance standards.
- Human-AI Collaboration: Emphasize the need for human oversight in AI governance. Discuss the collaborative approach between humans and LLMs.
- AI alignment methods
- Prompt engineering
- Fine-tuning
- Explainable AI
- Responsible AI practices
We believe that incorporating this knowledge can help address many of the most pressing challenges facing AI and society today. The primary goal of this workshop is to facilitate community building: AI governance spans two distinct communities, policymakers crafting regulations and researchers and developers building the technologies, and the workshop aims to bring them together to foster collaboration and mutual understanding. It also aims to educate a broader community about the intricacies of AI governance. Through informative sessions and discussions, participants will gain a deeper understanding of the challenges posed by AI technologies and the need for effective governance.
This workshop will be a hybrid event held in conjunction with AAAI 2025, taking place on March 3rd, 2025, in Philadelphia, PA, USA. The session will cover invited talks, contributed talks, posters, and a panel discussion.
Key Dates
- Submission deadline: Nov 30th, 2024 (11:59 pm AOE)
- Acceptance notification: Dec 30th, 2024
- Camera ready for accepted submissions: Jan 20th, 2025
Confirmed Keynote and Invited Speakers
Organizing Committee
Technical Program Committee (TPC)
We would like to express our sincere gratitude to our technical program committee for generously volunteering their time and expertise to review submissions for our workshop. Their valuable contributions have been instrumental in ensuring the quality and rigor of the workshop’s program. We deeply appreciate their dedication and commitment to our workshop’s success:
Tianhao Li, Ruixiang Qi, Tianliang Yao, Zecheng Zhang, Rahul Jain, Jiacheng Lu, Saai Krishnan Udayakumar, Rishit Dholakia, Haocheng Bi, Botao Zhang, Hejun Huang, Ahmed Olabisi Olajide, Mahak Shah, Chandrashekar Konda, Zhun Zhou, Zhengyu Fang, Gaurav Mishra, Rohan Kulkarni, Prithviraj Dasgupta, Imran Nasim, Akshata Kishore Moharir, Harsh NILESH PATHAK, Zhoujie Ding, Deepak Jayabalan, Utsha Saha, Aashish Sheshadri, Yuan Tian, YI HAN, Zonghao Ying, Ziyi Wang, Jahnavi Anilkumar Kachhia, Matin Khajavi, Haohang Li, Yuanjian Xu, Akaash Vishal Hazarika, Benedikt Schesch, Yue Li, Jiajing Chen, Bum Jun Kim, Junlin Guo, Qinfeng Zhu, Ramya Sree Boppana, KE WANG, Martin Meinke, Zijian Zhang, Ye Zhang, Amit Agarwal, Moncef GAROUANI, Songlin Jiang, Madiha Shakil Mirza, Abuh Ibrahim Sani, Sai Prasanna Teja Reddy Bogireddy, Pallavi Gudipati, Xiaoxia Lei, Sai Tarun Kaniganti, Victoria Abosede Ogunsanya, Shashikanta Sahoo, Maryam Taeb, Jue Xiao, Prudhvi Nethi, Rianat Abbas, Madhulekha Arunmozhi, Bala Siva Sai Akhil Malepati, Hengyang Zhou, Kshitij Chandna, Kai Xi, Ojas Gupta, Yidi Xu, Zong Ke, Vishal Shah, Chloe Zhu, Mukesh Yadav, GUANGYAN GAN, Yu Ma, Karanbir Singh, Feng Chen, Qi Song, Zhao Xu, Junlong Aaron Zhou, Chandrasekar Ramachandran, Venkat Nilesh Dadi, Linjun He, Ruchi Sharma, Botao Zhang, Bharath Mummadisetty, Reza Zakerian, Xihao Xie, Zerui Wang
Contact
For any questions, please contact us at blin@law.harvard.edu.
Sponsors
- Harvard Law School
- Icahn School of Medicine at Mount Sinai
- IBM Research
- Mila - Quebec AI Institute
- Fordham University