Humanity's goals and AI
While AI technologies are advancing rapidly, the world still has conflicts between nations, corporations, and individuals, and people often do not know what they truly want. This may not seem like a major risk today, but with the prospect of AI becoming super-intelligent, it is imperative for humanity to collectively define its goals to guide all optimization systems (the alternative is that AI will define them for us, and may be just one simple idea away from doing so). We do not yet have a way to define goals collectively, with the participation of all life, and we do not seem to know what those goals are.
Ideas for systems to define and align goals are welcome, as are suggestions to improve this question.
A public, self-explanatory, inter-lingual, financial, programmable, hierarchical think-tank.
Prioritize the use of AIs to empower humans to communicate and decide together, so that AI does not take over the world.