Manifund

Funding requirements

Sign grant agreement
Reach min funding
Get Manifund approval

Project on optimal government hedging against labour automation

AI governance

Nathan Barnard

Proposal · Grant
Closes September 21st, 2025
$9,000 raised
$9,000 minimum funding
$25,000 funding goal


29 days left to contribute



Project summary

There is a structural relationship between AI generating faster-than-exponential growth rates and the labour share of national income declining (potentially asymptotically) towards 0. As long as labour bottlenecks economic growth, labour will receive a meaningful share of the national product. But labour bottlenecking growth (in plausible exogenous growth models) prevents growth from shifting to a superexponential or hyperbolic regime.
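To make the bottleneck logic concrete, here is a textbook CES illustration (my shorthand for the standard result, not the model from the linked paper; α and ρ are the usual CES parameters):

```latex
% CES production and the implied labour share
Y = \left(\alpha K^{\rho} + (1-\alpha) L^{\rho}\right)^{1/\rho},
\qquad
s_L = \frac{(1-\alpha) L^{\rho}}{\alpha K^{\rho} + (1-\alpha) L^{\rho}}.
```

When the elasticity of substitution is below one (ρ < 0), then as K → ∞ output converges to (1−α)^{1/ρ} L and s_L → 1: fixed labour bottlenecks growth but keeps a meaningful income share. When ρ > 0, Y grows like α^{1/ρ} K, s_L → 0, and accumulable capital alone can drive explosive growth.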


Capital income is much more unequally distributed than labour income, which implies that, in worlds where we get faster-than-exponential economic growth, we’re also likely to get a significant increase in inequality, and potentially poverty, given that many people have zero or negative net financial assets.


I’m interested in ways in which governments can prevent outcomes where those with no or limited assets lose out on many of the upsides from TAI, without reducing growth rates and without relying on taxing capital owners or on AI firms voluntarily giving up large fractions of their value.


The specific avenue I’m investigating is the feasibility and optimality of governments creating a fund that pays out to unemployed workers who own little or no capital following labour automation, financed by governments selling bonds and using the revenues to buy equities (or potentially other securities correlated with the share of income going to capital).


The intuition for this approach is that, given the very unequal distribution of capital, governments should price the risk of labour automation differently from the marginal investor. In automation states, the marginal value of fiscal resources increases because governments need to redistribute income to unemployed workers, while at the same time the welfare cost of taxing capital income rises: the deadweight loss associated with capital taxation grows as a larger share of the economy relies on capital. Potentially more importantly, capital taxation has growth effects in addition to level effects in cases where automation increases the rate of economic growth, either via full automation of labour or via increased capital intensity in R&D.
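As an illustrative sketch of the mechanism (the numbers are hypothetical, chosen for exposition, not calibrated figures from the paper), the state-contingent payoff of the swap looks like this:

```python
# Stylized two-state sketch of the proposed hedge (illustrative numbers only).
# The government sells H of bonds at the risk-free rate and buys equities.
# In the automation state equities boom and the capital share rises, so the
# position pays out precisely when transfers to unemployed workers are needed.

RF = 0.02          # risk-free bond rate (hypothetical)
R_AUTO = 0.50      # equity return if labour is automated (hypothetical)
R_NO_AUTO = 0.04   # equity return in the no-automation state (hypothetical)
H = 100.0          # size of the debt-for-equity swap, $bn (hypothetical)

def fund_payout(equity_return: float, hedge_size: float = H) -> float:
    """Net fiscal resources the swap frees up in a given state."""
    return hedge_size * (equity_return - RF)

print(f"automation state:    {fund_payout(R_AUTO):+.1f}")
print(f"no-automation state: {fund_payout(R_NO_AUTO):+.1f}")
```

The point is the state-contingency: the position pays most in exactly the states where fiscal resources are most valuable and ex post capital taxation is most costly.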


I’ve shown rigorously that the debt-for-equity swap is feasible and optimal in the simple case in which a full automation event occurs with positive probability and households are split into a population that earns only labour income and a population that earns only capital income (paper linked here: https://www.dropbox.com/scl/fi/25aqajqz44v7oezy4a15i/TAI_insurance_8_3-60.pdf?rlkey=goo601o5dtqogxz70vmhb4gpj&st=l9vwvp6o&dl=0).


Over the next 6 months I aim to do the following: 

  • Generalise my results to a broader class of automation scenarios 

  • Quantify the optimal size of the hedge in a more realistic model of the economy, and the (utilitarian) welfare gains compared to funding transfers with ex post taxation 

  • Write policy briefs based on my technical results, aimed at policymakers generally and the AI governance community, and articles aimed at the general public


An important motivation for this project, which I don’t aim to focus on in the next 6 months but which is nevertheless an important motivator, is that if this policy is implemented I expect it to, in expectation, reduce the concentration of power in worlds with transformative AI by increasing the power of democratic governments and of individuals who own no capital assets, relative to large shareholders in AI firms (and potentially capital owners more broadly).


I also think it’s plausible that this project will reduce the probability of post-TAI conflict and protect the rule of law by acting as a credible commitment by governments to respect property rights, since it reduces the incentive to engage in ex post confiscation of assets.

What are this project's goals? How will you achieve them?


The high-level goal of this project is to increase the probability that rich-country governments adopt some version of the broad idea I’m developing and advocating for here. I have the following intermediate goals over the next 6 months:

  • Have a working paper up publicly (e.g. on arXiv, CEPR Discussion Papers, etc.)

  • Submit a paper to a top-10 generalist economics journal (e.g. the Review of Economic Studies) or a top specialist journal (e.g. the Journal of Public Economics)

  • Write policy briefs published by respected think tanks in the UK and the US

  • Have a realistic, quantitatively calibrated model that makes reasonable recommendations on the size of the hedge 

The theory of change for targeting this level of rigour is twofold: 

  1. This is a reasonably novel and potentially somewhat risky policy idea. It’s important that it’s correct on the merits - or at the very least that the tradeoffs can be easily evaluated - and I think this requires a high level of rigour.

  2. I aim to convince the academic macroeconomics community interested in issues around automation and public finance that this idea is correct and represents large welfare gains over ex post taxation. This requires developing the idea to the level at which it will be taken seriously by this community, and to the point where I can publish in a conventional, respected economics journal.


I reasonably strongly endorse convincing academic macroeconomists as a theory of change. Academic macroeconomics has a good track record of convincing rich-country governments to adopt its ideas on many technical policies. Examples include floating exchange rates, fiat money, counter-cyclical fiscal policy, central bank independence to facilitate counter-cyclical monetary policy, and the modern regulatory structure for systemically important financial institutions based around liquidity and capital requirements.

I also aim to influence policymakers via other channels, specifically by writing non-technical policy briefs to be published by prestigious think tanks (e.g. Brookings, the Social Market Foundation) and by working with the AI governance community (e.g. the Windfall Trust, GovAI).

Finally, I think there’s some chance that this is picked up by more mainstream media organisations (e.g. the Economist covered the work on using interest rates to forecast AI timelines), and a reasonable chance that I could write articles for venues with smaller audiences that are nevertheless influential amongst key policy audiences (e.g. Vox or Time, which reasonably regularly publish articles written by researchers on topics relevant to transformative AI, or blogs such as Slow Boring).

How will this funding be used?


This funding is to pay my salary for 6 months. The minimum I need to be able to work full time on the project (in the sense that I don’t expect my productivity on the project to be higher if I instead worked on it part time and took contract work to make more money, for instance) is $9,000.

This is below my minimum long-term sustainable salary, though, which is $25,000 for 6 months.

Who is on your team? What's your track record on similar projects?


I’m working on the core macroeconomics paper myself, supervised by Andrei Potlogea (Assistant Professor of Economics at Edinburgh, Senior Research Economist at Epoch AI).


On the policy implementation and advocacy side, I’ll be working with Deric Cheng (Director of Research at the Windfall Trust), and I have offers of varying levels of concreteness to work with people at think tanks and within the UK government.

In terms of track record, I think the strongest evidence that I’m likely to successfully write the full macroeconomics paper is the two drafts I’ve already written (linked below).


A fully rigorous version of the full automation model (linked previously, but linked again here for convenience):


https://www.dropbox.com/scl/fi/25aqajqz44v7oezy4a15i/TAI_insurance_8_3-60.pdf?rlkey=goo601o5dtqogxz70vmhb4gpj&st=l9vwvp6o&dl=0


Draft with the first general results on automation hedging, and results for full automation with growth limited to exponential (not fully rigorous):


https://www.dropbox.com/scl/fi/i74dyjhcfwsj3ij1cfm65/TAI_insurnace_4_4-10.pdf?rlkey=9dfhakbkhe4myd9avd0esil2x&st=4za8eq3x&dl=0


Some people who’ve expressed support for the idea are Andrei Potlogea (Lecturer (assistant professor) in Economics at Edinburgh University and Senior Research Economist at Epoch AI) and Basil Halperin, who recently received his PhD in financial economics from MIT and was lead author on the Interest Rates Don’t Forecast AGI paper. I’ve also got support for the idea from economists and policy professionals who have general expertise in the economics of AI but not the specific expertise to evaluate the proposal in detail, including Sam Manning (Senior Fellow at GovAI, who was research manager for OpenAI’s basic income study) and Deric Cheng (Director of Research at the Windfall Trust, an organisation working on ways to ensure that the upsides from AI are broadly distributed).


I also have experience doing policy-focused research and turning theoretical research into concrete policy recommendations. I spent 8 months contracting for Longview, working on ways in which narrow AI systems affect the risk of nuclear war, where I did foundational game-theory research and used that research to make concrete grant recommendations aimed at directly reducing nuclear risk. I’ve also done work for the Open Phil AI governance team, looking at US regulatory institutions, with a focus on financial regulation.


Finally, I have experience doing empirical economics as a research assistant for Mirko Draca (Professor of Economics, University of Warwick).

What are the most likely causes and outcomes if this project fails?


I think there’s a high chance that this project fails to influence government even conditional on my making correct decisions and executing them effectively over the next 6 months. Focusing on decisions and actions within my control, though, I think the following are the most likely failure modes, in order of probability:

  1. I fail to sufficiently popularise the idea in macroeconomics, AI governance, or general policymaker circles  

I’m confident I can write a paper up to the relevant standard for a specialist economics journal, and less confident - but still think it’s quite likely - that I can write a paper up to the standard of a top generalist journal, primarily because I already have a draft that is up to the standard of rigour required for a section of a paper in a respected journal, and I have access to supervision from an economist who’s published in top-5 journals. It’s much less clear to me that I’ll be able to popularise the idea sufficiently, given that this relies on public communication and networking skills in which I have less experience and a weaker track record.

  2. I write a paper that seems correct but has some important flaw that’s missed

I’m quite worried about this outcome because of the highly technical nature of the subject matter and the downside risk involved. The most serious (but I think really quite unlikely) downside risk is that the idea is adopted by a government despite being flawed, and so is harmful. I think the more likely failure is that I convince the AI policy community of the strength of the idea, but because it’s incorrect the energy expended on it is wasted and takes away energy from other, more important projects.

  3. I fail to write a sufficiently good paper

I haven’t published in an economics journal before, and so there’s some chance that I just need more experience and mentorship to be able to write a paper that’s sufficiently high quality.

How much money have you raised in the last 12 months, and from where?

I’ve been working on this project without funding for the last 6 months and am currently awaiting funding decisions from:

  • Long-Term Future Fund

  • Anthropic Economic Futures program 

  • Astral Codex Ten grants program

Comments
c (2 days ago), offering $9,000:

Committing $9k. The draft work seems high-quality and I think this is a relatively cheap bet at (a) socialising a useful idea and (b) building career capital. I appreciate that Nathan has strong mentorship/advice and has already been working on this.