Blaine William Rogers

@blaineaisafetytokyo

https://aisafety.tokyo
$601 total balance
$1 charity balance
$600 cash balance

$0 in pending offers

Projects

Year one of AI Safety Tokyo

Comments

Year one of AI Safety Tokyo

Blaine William Rogers

4 days ago

Progress update

What progress have you made since your last update?

In our second year and going into our third, AI Safety Tokyo has gone from strength to strength:

  • 54 new people attended our seminars, meaning we’ve now reached 100 people with at least 2 hours of high-quality AI Safety discourse since our founding in early 2023. 16 of those people attended more than one seminar, bringing the number of repeat attendees to 52. We have 13 regular members (people who have attended 2 of the last 3 seminars) and reliably average 7 attendees for each seminar.

  • In collaboration with the short-lived AI Industry Foundation, we ran another 14 seminars aimed at helping business people prepare for, navigate, and safely benefit from the generative AI revolution, reaching another 31 people with less technical AI Safety content.

  • In collaboration with Noeon Research we organized two international conferences on AI Safety in Japan, TAIS 2024 (https://tais2024.cc, writeup) and TAIS 2025 (https://tais2025.cc), connecting new AI Safety researchers in Japan (Japan AISI, U Tokyo, Nihon U, Shibaura IoT, NII, RIKEN, etc) with established researchers in the West (Anthropic, Google DeepMind, CAIS, CHAI @ Berkeley, FAR.AI, PIBBSS, Oxford, Cambridge, etc) and providing a consistent venue for AI Safety research in Japan.

  • AI Safety Tokyo members have now attended several events with Japan AISI, putting us in contact with key decision makers and regulators; as I argued in the ICRES Student Session on Alignment, this is especially important, since Japan is otherwise very accelerationist.

  • We are the second result on Google for “ai safety japan” and the first English-language result; we regularly receive emails from researchers passing through Tokyo and are able to make introductions to other organizations like ALIGN or CIRL @ U Tokyo.

  • We survived a soft change in leadership when our founder moved back to the UK in late 2024.

  • Open Philanthropy calls us out on their website as the kind of work they’d be interested in funding (though we have never actually been funded by OpenPhil).

What are your next steps?

  • AI Safety Tokyo will continue to run monthly seminars on topics in AI safety (technical safety, regulation, etc) for the foreseeable future.

  • We are looking into organising another TAIS conference next year (subject to collaboration and sponsorship).

Is there anything others could help you with?

AI Safety Tokyo is perhaps unique among local AI safety orgs in that we mostly reach mid-career professionals and academics, as opposed to students and fresh grads. Mid-career professionals are generally more skilled than students and are thereby poised to make a larger impact with their career transitions, but they care more about medium-term career prospects and job security. Many of our regular members are interested in careers in safety and merely lack good opportunities to pursue those careers in Japan. If you

  • work at a Japanese university and are interested in supervising new PhD students on safety topics

  • work for a Japanese company and are looking to hire someone to help your company comply with the Japan AI Guidelines for Business, the EU AI Act, the US Executive Order on AI or other AI regulation

  • work for a multinational company with an interest in safety and an office in Tokyo (Anthropic, Google DeepMind, OpenAI, etc)

please get in touch with someone@aisafety.tokyo.

TAIS 2026 is looking for sponsors and collaborators; please see https://tais2025.cc/ for a preliminary list of sponsorship tiers and benefits and contact someone@aisafety.tokyo if interested.

AI Safety Tokyo lacks people with time and energy to contribute to big projects more than it lacks money; if you live in Tokyo and would like to help with our fieldbuilding efforts, we want to meet you.

Inasmuch as people’s time and energy can be bought with money, we also lack money: there are projects we could pursue (AI safety bootcamps / retreats, research management, etc) if we could pay someone to work at AI Safety Tokyo full time. Contact someone@aisafety.tokyo if you’re interested in a more detailed pitch of what we could do with more money and we’ll write you one.

Year one of AI Safety Tokyo

Blaine William Rogers

over 1 year ago

@Jason Answering more directly: all of the success metrics in the certificate description, except those explicitly called out as not being attributable to AI Safety Tokyo, are fairly attributable to my work (number of sessions run, number of new HEAs, existence of TAIS 2024, etc).

Year one of AI Safety Tokyo

Blaine William Rogers

over 1 year ago

@Jason I guess it's the job of the oracular funder to deduce how much of the impact is attributable to my work? And the job of investors to guess what conclusion the funder will come to and buy shares based on that? This credit assignment problem exists for all impact markets: if a research team funded by an impact market creates a new kind of cheap lab-grown meat, and at the same time another team lobbies for animal rights laws that make traditional meat more expensive, who is responsible for lowering meat consumption? How much of the decrease is attributable to the impact-certificate-funded team?

Year one of AI Safety Tokyo

Blaine William Rogers

over 1 year ago

@Chris-Lakin I agree that this certificate is not a great fit for the Manifund model; I submitted it as an ACX grant and ticked the box because why not. Here, investors are betting purely on whether or not a retroactive funder will buy the certificate from them at a markup, not predicting whether the project will be successful. I guess investors might also be acting as retroactive funders themselves. I tried to minimize the double-funding problem by retaining as much of the equity as Manifund allows (99%). I set the minimum valuation such that the Manifund interface showed the value of the certificate as equal to the going rate for group organizers given by OpenPhil and CEA, but I don't know much about how Manifund works. Should I instead have released 100% of the shares to the market and set the minimum valuation as low as possible?
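For intuition, here is a toy sketch of the share math described in the comment above. It is a minimal sketch under stated assumptions: the share count and dollar figures are hypothetical, and the rule "valuation = total shares × price per share" is my assumption about how Manifund-style certificates price, not a documented Manifund formula.

```python
# Toy model of impact-certificate shares (hypothetical numbers; assumes
# valuation = total shares * price per share, which is an assumption
# about Manifund-style pricing, not a documented formula).

TOTAL_SHARES = 10_000                            # assumed share count
founder_shares = 9_900                           # founder retains 99%, as above
market_shares = TOTAL_SHARES - founder_shares    # 1% floated to investors

min_valuation = 60_000                           # hypothetical "going rate" in USD

price_per_share = min_valuation / TOTAL_SHARES
raised_from_float = market_shares * price_per_share

print(f"price per share:      ${price_per_share:.2f}")     # $6.00
print(f"raised from 1% float: ${raised_from_float:,.0f}")  # $600

# Retaining 99% caps the double-funding problem: investors can front at
# most 1% of the certificate's valuation, and a retroactive funder buying
# at a markup pays holders in proportion to their shares.
```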

Transactions

For | Date | Type | Amount
Year one of AI Safety Tokyo | over 1 year ago | user-to-user trade | +$545
Year one of AI Safety Tokyo | over 1 year ago | user-to-user trade | +$50
Year one of AI Safety Tokyo | over 1 year ago | user-to-user trade | +$5
<10bd8a14-4002-47ff-af4a-92b227423a74> | over 1 year ago | tip | +$1