AI Safety Tour

Bringing AI Safety to the Public Debate.

Making the risks as clear as the opportunities.

As we reflect on the progress made in artificial intelligence thus far and look ahead, it is clear that we have reached an important moment: the rapid pace of AI innovation, while undoubtedly exciting and transformative, also represents one of the most uncertain developments in human history.
Indeed, artificial intelligence presents significant existential risks, possibly within this decade; however, these risks can be effectively mitigated through ambitious public policy interventions and international cooperation.
AI Safety Tour is dedicated to mitigating those risks through the active engagement of experts and related public figures in the public debate. We have two main objectives:

International cooperation: promoting strong norms and standards for AI technologies that address existential risks globally.

Policy advocacy: providing policymakers with a clear understanding of the risks so they can make informed decisions about AI Safety policy interventions, and supporting their implementation.

To ensure that AI Safety concerns become common knowledge among lawmakers and in international institutions, we must include AI Safety experts and related public figures in the conversation on AI with policymakers and in the media.

Today, one of the most actionable ways to facilitate that conversation is supporting independent voices already advocating for AI Safety. We aim to help experts, academics, and public figures with (i) public appearances in the media and (ii) meetings with policymakers and international organizations.

Tiago & Siméon


Oversight of A.I.: Rules for Artificial Intelligence

Next Tuesday, the US Senate Judiciary Subcommittee on Privacy, Technology & the Law will hold a hearing on efforts to oversee and establish safeguards for AI. OpenAI CEO Sam Altman will testify before Congress for the first time, alongside Christina Montgomery, chief privacy officer at IBM, and Gary Marcus, professor emeritus at New York University.

We believe this is a great opportunity to raise awareness of some of the risks posed by AI technologies and to build common knowledge among US policymakers on possible solutions. Sam Altman and other industry leaders working on Artificial General Intelligence (AGI) are deeply aware of those risks, so it is important that policymakers benefit from that understanding.

If you live in the United States, you can play an important role as a constituent. Please reach out to your senator if they will be present at this hearing, and voice your concerns about AI safety. To help them navigate the debate, we propose a list of 10 questions, initially drafted by Siméon Campos, to share with the senators present at the hearing.

You can find out if a senator from your state will be present at the hearing and how to contact them here.

Calling is far more effective than emailing alone, as your concerns are more likely to be heard. Here's a script to help you call them:

Hello, my name is [Your Name] from [Your City, State]. I am calling regarding the upcoming Senate Judiciary Subcommittee on Privacy, Technology & the Law hearing on efforts to oversee and establish safeguards for AI. I have some concerns about the safety and ethical implications of AI development that I hope [Senator's Name] can raise at the hearing.

Specifically, I would appreciate it if [Senator's Name] could ask some or all of the following questions to the testifying experts, particularly Sam Altman from OpenAI:

1) When do you think we will develop the first AI system that surpasses humans in all cognitive capabilities? (This is referred to as “timelines” in AI jargon.)

2) In your view, what are the chances that humanity goes extinct as a result of developing powerful AI systems? Are they greater than 10%?

3) In your view, what are the chances that the next generation of frontier AI systems you’re developing causes human extinction? Are they smaller than 1%?

4) Beyond using AIs to solve the “alignment problem”, do you have any complete plan to prevent an existential catastrophe?

5) You said in a public appearance that the technology you’re developing could “maybe capture the light cone of all future value in the universe”. What exactly do you mean by that?

6) Despite many AI experts warning that we should slow down, you have said multiple times that the best scenario is to have a human-level AI system very quickly (i.e. “short timelines” in AI jargon). Can you explain in detail why you think those experts are wrong, and how short you mean when you say “short”?

7) Why are you racing to build AGI despite acknowledging that it's the greatest threat to humanity?

8) You have said multiple times that you delayed releasing GPT-4 in order to align it. At the same time, you gave the weights to Microsoft, which deployed a blatantly misaligned version of GPT-4. Given the increasingly close collaboration between OpenAI and Microsoft, do you have any plan to prevent this from happening again?

9) If other labs (including in China) provably stopped developing state-of-the-art models until we have a tried and tested existential safety plan, would you stop training models, potentially right now?

10) AI investor Jaan Tallinn has reported that many AI engineers at top AGI labs say the next generation of models has at least a 1% chance of leading to human extinction. Would OpenAI be open to having its models and systems undergo a comprehensive risk assessment, and to pausing the development of frontier AI systems until it is completed?

These questions address key areas that I believe we need to understand for the safe and ethical development of AI technologies. Thank you for your time and for representing my concerns at the hearing. I’ll make sure to send a follow-up email that includes these questions in full for your convenience.

And here’s a message you can send them:

Subject: AI Safety Concerns for Upcoming Senate Hearing

Dear Senator [Senator's Last Name],

I hope this message finds you well. My name is [Your Name], and I am one of your constituents from [Your City, State]. I recently called your office to discuss the upcoming Senate Judiciary Subcommittee on Privacy, Technology & the Law hearing on efforts to oversee and establish safeguards for AI.

As a concerned citizen, I am writing to further express my concerns about the potential risks and ethical implications of AI technologies. I believe that this upcoming hearing presents an excellent opportunity to address these issues and create meaningful dialogue.

During our phone call, I mentioned a series of questions that I, along with other members of our community, believe are critical to ask during this hearing. I am providing the list here, in the hope that you can incorporate some or all of these questions into your inquiries to the expert witnesses, especially Sam Altman, the CEO of OpenAI.

1) When do you think we will develop the first AI system that surpasses humans in all cognitive capabilities? (This is referred to as “timelines” in AI jargon.)

2) In your view, what are the chances that humanity goes extinct as a result of developing powerful AI systems? Are they greater than 10%?

3) In your view, what are the chances that the next generation of frontier AI systems you’re developing causes human extinction? Are they smaller than 1%?

4) Beyond using AIs to solve the “alignment problem”, do you have any complete plan to prevent an existential catastrophe?

5) You said in a public appearance that the technology you’re developing could “maybe capture the light cone of all future value in the universe”. What exactly do you mean by that?

6) Despite many AI experts warning that we should slow down, you have said multiple times that the best scenario is to have a human-level AI system very quickly (i.e. “short timelines” in AI jargon). Can you explain in detail why you think those experts are wrong, and how short you mean when you say “short”?

7) Why are you racing to build AGI despite acknowledging that it's the greatest threat to humanity?

8) You have said multiple times that you delayed releasing GPT-4 in order to align it. At the same time, you gave the weights to Microsoft, which deployed a blatantly misaligned version of GPT-4. Given the increasingly close collaboration between OpenAI and Microsoft, do you have any plan to prevent this from happening again?

9) If other labs (including in China) provably stopped developing state-of-the-art models until we have a tried and tested existential safety plan, would you stop training models, potentially right now?

10) AI investor Jaan Tallinn has reported that many AI engineers at top AGI labs say the next generation of models has at least a 1% chance of leading to human extinction. Would OpenAI be open to having its models and systems undergo a comprehensive risk assessment, and to pausing the development of frontier AI systems until it is completed?

These questions, we believe, are essential in clarifying the need for effective regulations that will ensure the safe and ethical development and deployment of AI. Your attention to these concerns would mean a great deal to me and to many others who share them.

I am grateful for your service and for your consideration of these matters. I look forward to seeing how this hearing contributes to the broader conversation about AI safety and ethics.

Thank you for your time.

Sincerely,
[Your Name]