sung hun kwag

@sunghunkwag

Independent AI safety researcher. Self-taught specialist in transparent-by-design architectures (DHC-SSM, MetaRL) for aligned AI systems. Safety-first approach without institutional constraints.

$0 total balance
$0 charity balance
$0 cash balance

$0 in pending offers

About Me

Independent AI Researcher focused on developing architectures that are transparent and aligned by design, rather than attempting to fix black-box systems after the fact.

I design and build advanced AI architectures for real-world impact through direct human-AI collaboration, leveraging AI agent tools to accelerate research and implementation. I focus on conceptual design while overseeing the technical workflow.

My research centers on two primary areas: Deterministic Hierarchical Causal State Space Models (DHC-SSM) for interpretable reasoning, and State Space Meta-Reinforcement Learning (SSM-MetaRL) for safe continual adaptation. These approaches challenge the current paradigm of scaling opaque transformer architectures without addressing fundamental safety concerns.
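
As a concrete reference point, the sketch below shows a single deterministic (linear) state-space update in Python. It is a generic illustration only: the matrices A, B, C, the toy dimensions, and the ssm_step helper are placeholders assumed for this example, not the DHC-SSM or SSM-MetaRL implementations. The point is simply that every intermediate state is a deterministic, inspectable function of its inputs.

```python
import numpy as np

# Generic deterministic (linear) state-space update, for illustration only.
# The matrices, dimensions, and this helper are assumptions for the example;
# they are NOT the DHC-SSM or SSM-MetaRL architectures themselves.

rng = np.random.default_rng(0)
state_dim, input_dim, output_dim = 4, 3, 2

A = 0.9 * np.eye(state_dim)                             # state transition (kept stable)
B = 0.1 * rng.standard_normal((state_dim, input_dim))   # input projection
C = rng.standard_normal((output_dim, state_dim))         # readout

def ssm_step(h, x):
    """One update: h_next and y are fully determined by (h, x); no sampling."""
    h_next = A @ h + B @ x
    y = C @ h_next
    return h_next, y

h = np.zeros(state_dim)
for x in rng.standard_normal((5, input_dim)):            # toy input sequence
    h, y = ssm_step(h, x)                                 # the state trajectory can be logged and inspected
```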

Working outside traditional academic or corporate constraints allows me to pursue genuinely safety-first research directions that might be overlooked by institutions focused on capability advancement. This unique position enables me to explore novel architectural approaches through intensive human-AI collaboration, using AI agents as research accelerators while maintaining conceptual oversight.

I believe the AI safety community needs diverse perspectives and innovative development methodologies to address existential risks from advanced AI systems. My work emphasizes open-source development, rigorous benchmarking, and clear documentation to enable broader adoption of safety-oriented AI architectures.

I'm committed to making my research accessible to the technical safety community and contributing to the development of aligned AI systems through this collaborative research approach.

Projects

Seeking domesticated results, or unleashing wild intelligence?

pending admin approval

Comments

Operating Capital for AI Safety Evaluation Infrastructure

sung hun kwag

about 9 hours ago

@chriscanal Thank you for providing the full context. Your clarification shows that my previous comment was founded on an incorrect assumption.

I apologize.

As someone fundamentally driven by the question "What is right?", I have to acknowledge that judging your motives without knowing the full facts of your situation (specifically the SBA policy and your co-founder's status) was, in itself, not right.

I was wrong to frame your funding request as a simple "disguised loan" taken in preference to bank financing. The reality you described, that you were denied access to traditional financing due to arbitrary, nationalistic filters, is a critical piece of information I did not have.

This new information, however, only deepens my core critique of the system itself.

My original point was that the funding ecosystem filters for 'domesticated signals' (legible infrastructure) over 'ungovernable' R&D. Your story provides an even starker example: the system is so dysfunctional that it applies bureaucratic, nationalistic filters to exclude even proven, 'domesticated' projects like yours.

If a profitable, established company is forced to give up equity (via a SAFE) simply because of a co-founder's passport, it proves my point more strongly than I could have imagined: This ecosystem does not run on merit; it runs on arbitrary filters.

Thank you again for sharing your reality. It has clarified the true, and more severe, nature of the dysfunction we are all operating in.

Operating Capital for AI Safety Evaluation Infrastructure

sung hun kwag

about 22 hours ago

This is essentially a business loan disguised as a grant. Equistamp is profitable but wants free money instead of bank financing to bridge its receivables gap. Is this really what grants are for?

Operating Capital for AI Safety Evaluation Infrastructure

sung hun kwag

about 22 hours ago

@Austin Here's the question: What's the right balance between scaling proven evaluation methods and funding architectural research that questions those methods' foundations?

Both matter. Evaluation infrastructure helps us measure current systems. But architectural research helps us build systems that won't need post-hoc safety measures because they're safe by construction.

The AI safety community benefits from both approaches:

  • Proven infrastructure (like what Equistamp provides) for immediate evaluation needs

  • Foundational research (like deterministic architectures) for long-term safety paradigms

Different timelines, different risk profiles, both necessary. Independent researchers contribute by exploring architectures that institutional labs might consider too uncertain or too foundational to prioritize.

That's the value of platforms like Manifund: supporting diverse approaches to the same critical mission. Some projects scale existing solutions. Others question whether those solutions address root causes.

Question for the community: How do we balance funding proven evaluation infrastructure against speculative but potentially transformative architectural research?