AI Security IIT Delhi
We are a team of engineers and researchers dedicated to mitigating the risks from advanced artificial intelligence.
We conduct an educational six-week AI safety fellowship that brings together bright students and researchers to work on AI safety research and development.
For advanced AI safety researchers, we may also provide mentorship and compute resources to support your work.
Research
Publications
Research Projects
[UPCOMING]
Technical AI Security Fellowship
Join us for a curated six-week AI safety fellowship at IIT Delhi, designed to help enthusiastic students get started with AI safety research and development.
Why join?
- Read leading AI research papers from OpenAI, Anthropic, and Google DeepMind.
- Get funding for your own research project in AI safety and alignment.
- Access an exclusive network of mentors and researchers at leading frontier AI labs worldwide.
- Fast-track your career in AI safety and alignment.
Our Theory of Victory
AI safety is an emerging field whose long-term consequences will be enormous. Given the current rate of progress in AI research and development, many AI researchers expect AGI to be developed within the next three to five years (see [2], [3]). Deploying rogue or misaligned AI systems in public carries grave consequences, translating to monetary losses in the millions or billions (see [1]).
Despite spectacular progress in building new AI systems (see [7]), research and development of safeguards and safety protocols around these systems has stagnated. According to an 80,000 Hours article, there were only around 300 AI safety researchers in 2022 (see [5]), and their number has been growing at about 28% per year (see [6]). Compounding that growth rate forward suggests the field could have grown to roughly 490 researchers by 2024. AI safety-related articles rose by 315% between 2017 and 2022 (see [4]). However, this is still a "drop in the bucket": merely 2% of all AI/ML articles are directly related to safety.
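The compound-growth extrapolation above can be reproduced in a few lines. This is an illustrative sketch only, assuming the ~28% rate compounds annually from the ~300-researcher baseline cited in [5] and [6]; the helper name `extrapolate` is ours:

```python
# Compound-growth extrapolation of the AI safety researcher headcount.
# Baseline (~300 in 2022) and growth rate (~28%/yr) are the figures
# cited in [5] and [6]; the result is a rough estimate, not a census.

def extrapolate(baseline: float, annual_growth: float, years: int) -> float:
    """Project a quantity forward assuming annual compounding."""
    return baseline * (1 + annual_growth) ** years

estimate_2024 = extrapolate(baseline=300, annual_growth=0.28, years=2)
print(round(estimate_2024))  # prints 492, i.e. roughly 490
```

Even under this optimistic extrapolation, the field remains tiny relative to AI capabilities research as a whole.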
What must happen, starting today, to prevent AGI from becoming a catastrophic risk to humanity, and to mitigate monetary losses and the loss of human life in the near- and long-term future?
We believe our best shot at mitigating the risks of AGI is to develop safety and control protocols at the technical level and to devise new policies for containing potential misuse. Given the dearth of AI safety researchers and the proven track record of IITians, now is the most opportune time for short-term investment to promote technical AI safety as a career option for IITians and to set them on this path.
There are 23 IITs in India, yet no AI safety group exists at any of them. AI Security IIT Delhi is a student research group catering to bright young minds in the Indian subcontinent, modeled on student safety groups at Western universities such as HAIST [9], the Oxford AI Safety Initiative [10], and the Berkeley AI Safety Initiative [11].
Supported By
Our Team
Prof. Tarun Mangla
Basil Labib
Co-organizer
Krishna Goel
Co-organizer
Arnav Raj
Researcher
Shikhar Gupta
Researcher
Ravish Jha
Researcher
Sanidhya Ojha
Researcher
Manoj K Gorle
Researcher
Gargi Rathi
Researcher
Arnav Panjla
Researcher
Indranil Bhadra
Researcher
Rishabh Jain
Researcher
Contact
If you have any questions, please contact us at iitdelhi [DOT] aisi [AT] gmail [DOT] com.