We organized adversarial collaborations between subject-matter experts and expert forecasters with opposing views on whether recent advances in Artificial Intelligence (AI) pose an existential threat to humanity in the 21st century. Two studies incentivized participants to engage in respectful perspective-taking, to share their strongest arguments, and to propose early-warning indicator questions (cruxes) for the probability of an AI-related catastrophe by 2100. AI experts saw greater threats from AI than did expert forecasters, and neither group changed its long-term risk estimates, but both preregistered cruxes whose resolution by 2030 would sway their long-term views. These persistent differences shrank as the questions moved further into the future, from 2100 to 2500 and beyond, by which time both groups put the risk of extreme negative outcomes from AI at 30%–40%. Future research should address whether these results generalize beyond our sample to other expert populations, and beyond AI to other questions and time frames.
