Paul Christiano - Wikipedia: Paul Christiano is an American researcher in the field of artificial intelligence (AI), with a specific focus on AI alignment, the subfield of AI safety research that aims to steer AI systems toward human interests. [1]
Paul Christiano: I am the head of AI safety at the Center for AI Standards and Innovation within NIST. I previously ran the Alignment Research Center and the language model alignment team at OpenAI. Before that, I received my PhD in statistical learning theory from UC Berkeley.
Paul Christiano | NIST: Paul Christiano is head of AI safety for the U.S. Artificial Intelligence Safety Institute. In this role, he will design and conduct tests of frontier AI models, focusing on model evaluations for capabilities of national security concern.
Paul Christiano - Google Scholar: Paul Christiano, National Institute of Standards and Technology. Verified email at nist.gov - Homepage. Artificial Intelligence.
Paul Christiano on how OpenAI is developing real solutions to the AI …: Paul Christiano is one of the smartest people I know, and this episode has one of the best explanations for why AI alignment matters and how we might solve it. After our first session produced such great material, we decided to do a second recording, resulting in our longest interview so far.
AI alignment – Paul Christiano: Since March 2021 I have been running the Alignment Research Center. From January 2017 to January 2021 I worked on the safety team at OpenAI. I am an advisor to the UK AI Safety Institute, a trustee of Anthropic’s Long-Term Benefit Trust, and on assorted other boards and advisory panels.