RFI: Neuroscience & the Path to Safe AGI

Safe Artificial General Intelligence through Neuroscience

Amaranth Foundation
Jul 30, 2025

Cross-post from The Amaranth Letters
What can neuroscience teach us about building artificial general intelligence (AGI) that is safe, aligned, and broadly beneficial—especially if frontier‑level systems arrive sooner than many expect? We're looking for your ideas through this RFI. Help us shape next-gen NeuroAI. – Patrick Mineault

The Amaranth Foundation supports cutting‑edge neuroscience with the belief that insights from the brain can—and should—shape the emergence of artificial general intelligence (AGI) toward equitable, beneficial outcomes. We’ve published this short Request for Information (RFI) to invite the research community to share their ideas on NeuroAI and AI safety. We hope to clarify (1) the broad relevance of neuroscience to AGI and its safety, and (2) strategic directions the field might take if frontier‑scale AGI arrives within ~3 years under today’s large‑model paradigm. We invite you to share this RFI broadly within your network.

1 – Neuroscience ↔ AGI: Open‑Ended Relevance

How can your area of neuroscience meaningfully inform, constrain, or inspire the design and governance of future AGI systems? We welcome perspectives spanning cellular and circuit mechanisms, cognitive and social neuroscience, computational modelling, neuro‑inspired architectures, brain‑data‑driven training regimes, and ethical or societal insights grounded in the study of natural intelligence. You may wish to reflect on themes surfaced in the “NeuroAI for AI Safety” roadmap (Mineault et al., 2024), which outlines eight ways brain science could mitigate key AI‑safety failure modes—from robustness and out‑of‑distribution generalisation to the cultivation of pro‑social agency.

2 – Short‑Timeline Scenario (~2025–2028)

Now, assume that frontier AI labs achieve highly capable, largely autonomous AI R&D systems by late 2027 (Kokotajlo et al., 2025; Aschenbrenner, 2024), and that this rapidly leads to AGI. In that world:

What near‑term research and development opportunities exist for the neuroscience community—e.g., data generation, tool development, fundamental theory, or interdisciplinary partnerships—to most effectively shape the safety, alignment, and societal integration of frontier AI systems?

Concrete proposals might touch on rapid‑cycle brain‑data benchmarks for agentic models, causal intervention toolkits inspired by modern neurotech, biologically grounded metrics of value formation, or multi-agent governance frameworks informed by comparative cognition. Feel free to challenge these examples.

Practicalities

  • Response format: ≤ 2 pages (PDF or equivalent), free‑form prose

  • Deadline: 23:59 ET, 29 Aug 2025

  • Submission: neuroaisafety@amaranth.foundation

  • Review: Internal; selected authors may be invited to a virtual salon or to submit full proposals. No material will be shared externally without permission.

For clarifications, email neuroaisafety@amaranth.foundation with the subject “Neuro × AGI RFI Query”.

Thank you for helping map how neuroscience can guide the safe evolution of artificial general intelligence.
