AI Safety Advocates

“Safely aligning a powerful AGI is difficult.”

Eliezer Yudkowsky

We believe that unaligned AI poses the greatest risk our species has ever faced.

We have precious little time to steer transformative AI so that it does not, intentionally or unintentionally, result in the extinction or subjugation of all life as we know it. It is an extraordinarily risky technology, with enormous upsides if, and only if, it is developed with extreme caution and wisdom.

An Upgrade Program (“UP”) helps you improve your entire life and systematically achieve your highest-value goals. If you share our existential worries about unaligned AI, you likely share a similar goal: surviving and thriving in an AI-dominated world.

You can do your UP on your own for free or, if you find it cost-effective, use our aligned coaches, assistants, and wide array of specialists.

With AI safety advocates, we generally do Max UPs between 1 day and 1 year long, focused on measurably improving whatever is most valuable for your AI safety work output. That varies from person to person, but it generally revolves around your productivity, emotional wellbeing, physical wellbeing, financial security, personal safety, and/or emergency preparedness. We have science-based, partly outsourceable plans (including carefully curated tools, assessments, and resources) for optimizing these and every other area of your life.

For safety-focused AI entrepreneurs, we also have a large set of meticulously developed startup templates, sample files, tools, and procedures, and we can build much of your organization with or for you. Our team has built 25+ organizations that have cumulatively operated for 200+ years. These include Effective Altruism Global and Effective Altruism Ventures, both of which have advocated for AI safety since 2015, and Singapore Futurists and Bay Area Futurists, which we have built or led since 2012.

Think of us as your personal J.A.R.V.I.S. or Alfred Pennyworth. We can help you with almost anything you need, whenever you need it, as long as it helps you do your work more efficiently and effectively. We aim for maximum counterfactually-adjusted net impact. See some examples.

If you could use our help, please reach out. We’d love to help you achieve your AI safety goals.

If you want to help subsidize UPs for financially constrained AI safety advocates, please consider donating.