What progress have you made since your last update?
Through our second year and into our third, AI Safety Tokyo has gone from strength to strength:
54 new people attended our seminars, meaning we have now reached 100 people with at least two hours of high-quality AI safety discourse since our founding in early 2023. 16 of those people attended more than one seminar, bringing the total number of repeat attendees to 52. We have 13 regular members (people who have attended two of the last three seminars) and reliably average seven attendees per seminar.
In collaboration with the short-lived AI Industry Foundation, we ran another 14 seminars aimed at helping business people prepare for, navigate, and safely benefit from the generative AI revolution, reaching another 31 people with less technical AI safety content.
In collaboration with Noeon Research we organized two international conferences on AI Safety in Japan, TAIS 2024 (https://tais2024.cc, writeup) and TAIS 2025 (https://tais2025.cc), connecting new AI Safety researchers in Japan (Japan AISI, U Tokyo, Nihon U, Shibaura IoT, NII, RIKEN, etc) with established researchers in the West (Anthropic, Google DeepMind, CAIS, CHAI @ Berkeley, FAR.AI, PIBBSS, Oxford, Cambridge, etc) and providing a consistent venue for AI Safety research in Japan.
AI Safety Tokyo members have now attended several events with Japan AISI, putting us in contact with key decision makers and regulators. As I argued at the ICRES Student Session on Alignment, this is especially important, since Japan is otherwise very accelerationist.
We are the second result on Google for “ai safety japan” and the first English-language result; we regularly receive emails from researchers passing through Tokyo and are able to make introductions to other organizations like ALIGN or CIRL @ U Tokyo.
We survived a soft change in leadership when our founder moved back to the UK in late 2024.
Open Philanthropy highlights us on their website as the kind of work they would be interested in funding (though we have never actually been funded by OpenPhil).
What are your next steps?
AI Safety Tokyo will continue to run monthly seminars on topics in AI safety (technical safety, regulation, etc) for the foreseeable future.
We are looking into organising another TAIS conference next year (subject to collaboration and sponsorship).
Is there anything others could help you with?
AI Safety Tokyo is perhaps unique among local AI safety orgs in that we mostly reach mid-career professionals and academics, as opposed to students and fresh graduates. Mid-career professionals are generally more skilled than students and are thereby poised to make a larger impact with their career transitions, but they care more about medium-term career prospects and job security. Many of our regular members are interested in careers in safety and merely lack good opportunities to pursue those careers in Japan. If you:
- work at a Japanese university and are interested in supervising new PhD students on safety topics,
- work for a Japanese company and are looking to hire someone to help your company comply with the Japan AI Guidelines for Business, the EU AI Act, the US Executive Order on AI, or other AI regulation, or
- work for a multinational company with an interest in safety and an office in Tokyo (Anthropic, Google DeepMind, OpenAI, etc),
please get in touch with someone@aisafety.tokyo.
TAIS 2026 is looking for sponsors and collaborators; please see https://tais2025.cc/ for a preliminary list of sponsorship tiers and benefits and contact someone@aisafety.tokyo if interested.
AI Safety Tokyo is short on people with the time and energy to contribute to big projects more than it is short on money; if you live in Tokyo and would like to help with our fieldbuilding efforts, we want to meet you.
Inasmuch as people's time and energy can be bought with money, we are also short on money; there are projects we could pursue (AI safety bootcamps and retreats, research management, etc) if we could pay someone to work at AI Safety Tokyo full time. Contact someone@aisafety.tokyo if you are interested in a more detailed pitch of what we could do with more money, and we will write you one.