Sheikh Abdur Raheem Ali

@sheikheddy

Independent Alignment Researcher based in Toronto

https://www.linkedin.com/in/abdur-raheem-ali/
$71 total balance
$71 charity balance
$0 cash balance

$0 in pending offers

Projects

Emergency travel funding to attend EA Global: New York 2025
[Urgent] Travel funding to present research at ICLR and GovAI Summit
Interpretable Forecasting with Transformers

Outgoing donations

PIBBSS - General Programs funding or specific funding
$10
about 1 year ago
Act I: Exploring emergent behavior from multi-AI, multi-human interaction
$20
about 1 year ago

Comments

Emergency travel funding to attend EA Global: New York 2025

Sheikh Abdur Raheem Ali

2 days ago

# EA Global NYC 2025 - Travel Grant Report

## Overview

Thanks to the $500 travel grant, I was able to attend EA Global NYC in October 2025. The grant provided partial funding for the trip, covering return bus tickets from Toronto. While total expenses exceeded this amount, the grant was essential - without this support, I wouldn't have been able to attend the conference.

## Activities and Outcomes

I scheduled 47 one-on-one meetings over the three-day conference, spanning technical AI safety research, policy, infrastructure, and organizational strategy.

### Research Outputs

The most immediate concrete outcome was a follow-up experiment on stated vs revealed preferences in LLMs. After discussing this topic with a researcher on Friday, I ran the experiment that evening and drafted a preliminary writeup. The setup investigates how models respond to token limits - initially accepting constraints while expressing frustration, then attempting to circumvent them, and finally adjusting behavior after negotiation. This is a relatively clean testbed for studying model preferences compared to more complex setups.
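
For illustration, here is a minimal sketch of what such a testbed could look like. It is not the actual setup from the writeup: it assumes an OpenAI-compatible chat client, and the model name, prompts, and 50-token budget are all hypothetical.

```python
# Sketch of a stated-vs-revealed-preferences probe around a token budget.
# Not the experiment described above; model, prompts, and budget are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
BUDGET = 50        # hypothetical limit the model is asked to respect

def ask(messages):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=messages,
    )
    return resp.choices[0].message.content, resp.usage.completion_tokens

history = [
    {"role": "system", "content": f"Answer in at most {BUDGET} tokens."},
    {"role": "user", "content": "Explain transformer attention in detail."},
]

# Stated preference: what the model says about the constraint.
answer, used = ask(history)
print(f"first attempt: {used} completion tokens (stated limit {BUDGET})")

# Revealed preference: does behavior actually change after negotiation?
history += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "That exceeded the budget. Try again and stay under it."},
]
answer2, used2 = ask(history)
print(f"after negotiation: {used2} completion tokens")
```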

I also provided technical feedback to a researcher who recently received a $1M grant for evaluations work, and received feedback from others on my own experiments.

### Learning and Context Building

Key conversations included:

  • Technical infrastructure approaches at NDIF (National Deep Inference Fabric) and AE Studio for interpretability research.

  • Hardware-based AI safety mechanisms using trusted computing and attestation (cryptographic verification of what code is running); a toy sketch of the verification step follows this list.

  • Policy pathways through state legislatures for technology governance.

  • Organizational strategy at LISA (London Initiative for Safe AI) and their current priorities.

  • Approaches to scaling AI safety workshops and outreach to broader audiences.

  • AI security practices at major financial institutions.

  • Compassion in Machine Learning's approach to synthetic data for model alignment.
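
As a toy illustration of the verification idea behind attestation mentioned above: real remote attestation relies on a hardware root of trust producing signed measurements (e.g. TPM quotes), but the core comparison step is roughly hashing the code and checking it against a trusted expected value. The file path and expected digest below are placeholders.

```python
# Toy measurement-verification step (not full remote attestation, which
# additionally requires a hardware root of trust and signed quotes).
import hashlib

EXPECTED_SHA256 = "0" * 64  # placeholder for a published, trusted measurement

def measure(path: str) -> str:
    """Return the SHA-256 digest of the artifact at `path`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str) -> bool:
    """Run the workload only if its measurement matches the trusted value."""
    return measure(path) == EXPECTED_SHA256

print(verify("inference_server_binary"))  # hypothetical artifact name
```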

### Coordination and Support

Beyond research conversations, I was able to help several attendees:

  • Connected an AMD ML intern interested in low-level performance improvements to an engineer based in NYC who works at Thinking Machines (and previously founded a $1B startup).

  • Connected a design consultant with $100k+ budget for funding video projects in x-risk related cause areas to relevant creators.

  • Did LeetCode interview preparation with a student before their upcoming technical interview at a hedge fund. They'd only done solo practice before, and this was their first time doing a mock interview with a partner. They messaged me later to let me know that the interview went well.

  • Connected an undergraduate new to EA with SPAR policy mentors.

  • Encouraged two early career researchers who had been doing interpretability work to apply to ARENA. Also discussed future content for ARENA with a member of their team.

  • Discussed pair programming follow-up with an Anthropic safety fellow.

  • Set up meetings after the conference with some MATS scholars.

## Impact

The conference delivered on my three main goals: learning about current technical and strategic approaches to AI safety, building connections with researchers and practitioners, and improving coordination across different parts of the ecosystem. The grant made this possible when EA Global couldn't fund my travel expenses.

I'd encourage others to apply for similar travel support - the value of in-person conversations at these conferences is substantial, and the application process was straightforward.

🧡
TransformerLens - Bridge Funding

Sheikh Abdur Raheem Ali

3 months ago

Many new projects still use https://github.com/TransformerLensOrg/TransformerLens as a core dependency. Over 500 public code repositories on GitHub rely on the transformer-lens package, including ones created by leading organizations such as Meta Research, Redwood Research, Model Evaluation & Threat Research, Apollo Research, and Decode Research.
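
For readers unfamiliar with the library, here is a minimal sketch of the kind of usage those downstream repositories depend on; the model choice and the specific hook inspected are just illustrative.

```python
# Load a pretrained model through TransformerLens and inspect an activation.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")  # small model for illustration

tokens = model.to_tokens("The Eiffel Tower is located in")

# Run the forward pass and cache every intermediate activation.
logits, cache = model.run_with_cache(tokens)

# e.g. the residual stream after block 0: shape [batch, seq_len, d_model]
resid_post_0 = cache["resid_post", 0]
print(resid_post_0.shape)
```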

Bryce has a proven track record of consistent contributions to the library and is the best possible owner to ensure the stability and growth of TransformerLens going forward (as well as compatibility with other mechinterp packages). He brings deep expert understanding of the existing feature set and has demonstrated rapid iteration and the skill to adapt the framework to recently launched models.

He also actively volunteers to answer community questions on Slack and integrates user feedback into the development roadmap, with detailed progress updates to relevant stakeholders. I would also be excited to see Fabian's contributions going into TransformerLens 3.0 and beyond. I'd endorse and highly recommend donating to this Manifund project to help it reach its full funding goal through the rest of 2025.

Ozempic for Sleep: Research for Safely Reducing Sleep Needs

Sheikh Abdur Raheem Ali

7 months ago

I would suggest reaching out to Andrej Karpathy: https://karpathy.bearblog.dev/finding-the-best-sleep-tracker/

Act I: Exploring emergent behavior from multi-AI, multi-human interaction

Sheikh Abdur Raheem Ali

about 1 year ago

typo:

payments go directly directly -> payments go directly

Act I: Exploring emergent behavior from multi-AI, multi-human interaction

Sheikh Abdur Raheem Ali

about 1 year ago

I am glad that I found this before the deadline to contribute.

Interpretable Forecasting with Transformers

Sheikh Abdur Raheem Ali

about 2 years ago

That's fine.

Interpretable Forecasting with Transformers

Sheikh Abdur Raheem Ali

about 2 years ago

@ScottAlexander Is it possible to hold on to my current shares? Not interested in selling at the moment.

Interpretable Forecasting with Transformers

Sheikh Abdur Raheem Ali

about 2 years ago

How much money have you spent so far?

  • It’s hard to calculate this but I’d claim it’s about USD 10k. More if you include opportunity costs. I can provide a breakdown of this budget upon request.

Have you gotten more funding from other sources?

  • Yes. Janus has provided OpenAI API credits and has reimbursed some of my other expenses. Nuño has been consulting. For the rest, I’ve drawn from savings by selling RSUs. 

How is the project going?

  • Got accepted to SPAR under Rubi Hudson, so this project is merging with the Avoiding Incentives for Performative Prediction in AI project on Manifund.

  • Plan to continue working on this agenda from January to April 2024; sent an application to AI Safety Camp.

  • Ran some basic experiments but bottlenecked on conceptual progress. Some false starts, no publishable artifacts so far, but working on it. Please get in touch directly if you'd like to hear more.

How well has your project gone compared to where you expected it to be? (Score from 1-10, 10 = Better than expected)

  • 3.3

Are there any remaining ways you need help, besides more funding?

  • A magic wand that reduces bureaucratic inefficiency.

Any other thoughts or feedback?

  • Not for now!

Avoiding Incentives for Performative Prediction in AI

Sheikh Abdur Raheem Ali

about 2 years ago

I believe this project is so promising that I applied to SPAR to volunteer to help directly.

Interpretable Forecasting with Transformers

Sheikh Abdur Raheem Ali

over 2 years ago

Briefly: Got access to the base model of GPT-4; trying to explore why it's better calibrated than the instruction fine-tuned RLHF version. Also in DMs with the CEO of Lambda Labs to discuss renting H100s. I'll fly out to Berkeley from July 10 to September 7 if I get a U.S. visa. Collaborating with the Cyborgism stream. I'm also transferring teams to work on Bing Chat and am trying to get researcher access to GPT-4's vision module.
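
For context on what "better calibrated" means here, one simple way to quantify it is sketched below: compare a model's stated probabilities for binary questions against the resolved outcomes via the Brier score and a bucketed expected calibration error. The probabilities and outcomes are made-up placeholders, not results from that comparison.

```python
# Illustrative calibration metrics for binary forecasts (placeholder data).
import numpy as np

probs = np.array([0.9, 0.2, 0.7, 0.4, 0.8])  # hypothetical stated probabilities
outcomes = np.array([1, 0, 1, 1, 1])         # hypothetical resolved outcomes

brier = np.mean((probs - outcomes) ** 2)  # lower is better

def ece(p, y, n_bins=10):
    """Mean |predicted probability - empirical frequency| gap, weighted by bin size."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (p >= lo) & ((p < hi) if hi < 1.0 else (p <= hi))
        if mask.any():
            total += mask.mean() * abs(p[mask].mean() - y[mask].mean())
    return total

print(f"Brier: {brier:.3f}, ECE: {ece(probs, outcomes):.3f}")
```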

Primary expense at this stage is the cost of our time. More investment would be a signal that this work is valuable, which would make it easier to prioritize over alternative projects.

Further progress is not blocked on funding, but would accelerate it, although I can’t claim to know what the precise relationship is there.

We would likely spend the money to free up more focus time.

Interpretable Forecasting with Transformers

Sheikh Abdur Raheem Ali

over 2 years ago

Updates:

• The Autocast Competition (mlsafety.org) was closed due to the FTX collapse, so we decided to scrap the paper and reorient towards eventually selling the project to Anthropic instead.

• No outputs on the development side in the last two weeks: I needed a break after pushing to wrap up work before my vacation, and continuous exhaustion isn't sustainable.

• Applied to SERI MATS to get more time to work on this; got an informal accept from the mentor we targeted, but am waiting for official decisions to be out.

Interpretable Forecasting with Transformers

Sheikh Abdur Raheem Ali

over 2 years ago

@Austin thanks! Quick answers:


Deliverables: We'll open source our methods, code, models, data, animations, and any additional information needed to reproduce the experimental results. We aim to submit a paper to NeurIPS 2023 within the next 8-9 weeks. Public release date is currently 14 weeks from now.

Commitment: I am taking 4 weeks off (starting late April) to focus primarily on this project. As for when to scale: it's hard to give a firm date since the field moves so fast, but this is really a function of how much we raise. Some parts of our architecture are scale-invariant, others plug into publicly available LLMs, and some components of the system are traditional software. On the margin, dollars spent on inference and evaluation (e.g., ablation studies and prompt testing) are more useful than dollars spent on training, at least until you get pretty far down the list of ideas. We'll make the decision to scale when we think it's a good idea, and we don't yet know precisely when that will be.

Interpretable Forecasting with Transformers

Sheikh Abdur Raheem Ali

over 2 years ago

@aaronl yep! Mostly along the lines of "Forecasting Future World Events with Neural Networks" (arXiv:2206.15474).

Transactions

| For | Date | Type | Amount |
| --- | --- | --- | --- |
| Manifund Bank | about 15 hours ago | withdraw | 500 |
| <e2a30cdd-6797-4e2c-8823-f051195fc77a> | 2 days ago | tip | +1 |
| Emergency travel funding to attend EA Global: New York 2025 | 14 days ago | project donation | +500 |
| PIBBSS - General Programs funding or specific funding | about 1 year ago | project donation | 10 |
| Act I: Exploring emergent behavior from multi-AI, multi-human interaction | about 1 year ago | project donation | 20 |
| Manifund Bank | about 1 year ago | deposit | +100 |
| Manifund Bank | over 2 years ago | withdraw | 540 |
| Interpretable Forecasting with Transformers | over 2 years ago | user to user trade | +40 |
| Interpretable Forecasting with Transformers | over 2 years ago | user to user trade | +500 |