If the solution to AI alignment involves enhancing human minds and/or society, how will this be done?
34% — Germline engineering
33% — Brain emulation
33% — Signaling molecules for creative brains
25% — Adult brain gene editing
24% — Massive cerebral prosthetic connectivity
18% — Human / human interface
16% — Social software for thinking
8% — External support for thinking
8% — Mental software for thinking
The list of methods is taken from TsviBT's "Overview of strong human intelligence amplification methods".
This question is managed and resolved by Manifold.
What are the resolution criteria? I could believe that many of these will be useful; for example, I can definitely see it relying heavily on external support for thinking as listed in the article (printing press, text editor, search engine, typewriter) and on technologies beyond that, and obviously on mental and social software as well.
Related questions
If a huge alignment effort is part of the reason for AI having an okay outcome, will it involve a new AI paradigm?
58% chance
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
51% chance
Will the 1st AGI solve AI Alignment and build an ASI which is aligned with its goals?
17% chance
If ASI is created and doesn't wipe out humanity, will it torture any human-level-intelligences within a year?
24% chance
Will an AI built to solve alignment wipe out humanity by 2100?
12% chance
Will ASI (if it exists and doesn’t wipe out humanity) care about people’s social relationships?
55% chance
Will the solution to the AI alignment problem involve making ALL of Isaak Freeman's dreams/wishlists COME TRUE?
37% chance
Will ASI (if it is created and also does not wipe out humanity) care about Morphological freedom?
49% chance
Will the solution to the AI alignment problem involve making ALL of Jose Luis Ricon's dreams COME TRUE?
19% chance
Conditional on AI alignment being solved, will governments or other entities be capable of enforcing use of aligned AIs?
37% chance