5 Comments
Kris Lyser

It's somewhat contradictory pivoting from deep mapping - assuming hardware will indeed teach us about software - to a distillation compromise. Moreover, acquiring the connectome might be very much less than "only half the battle", if one looks beyond the cellular level. If I'm right in synthesizing current neurobiology and concluding LLMs are really but an "externalized cerebellum", then your mission statement truly is a compilation of multiple mismatches.

Yoyo is defocusing ultrasound

I see the alignment framing, but it seems like too many steps are needed to get from it to the fundamental research questions. How does the neuroscience funding landscape look in general, compared to neuroscience for alignment specifically, i.e., people studying critical questions such as the redundancy of neurons, mathematical frameworks for neuroscience, etc.?

Dario Ringach

Maybe I am missing the point. It seems to me AI is a tool. Just like any other tool, it can be used for good or evil. How exactly the tool is developed has little consequence for how it will be used. Consider the recent dispute between Anthropic and the Pentagon.

SaberToaster

IMO, from that perspective, even a human can be seen as a tool, no? Throughout the history of humankind there have been numerous events that wreaked havoc, and, on the contrary, many philanthropic undertakings as well.

Naveen Rao

Thought-provoking frame. Does this imply that “values” have behavioral (action-based) neural precedents, not just linguistic/word-output origins?