The utility function of "consequentialism"! Beware people with a confident ideology holding that acceptance of evil now is justified by a supposed or promised good end! Humans can't work out such utility functions in complex real-world dynamics, and I wouldn't trust an AI based on past data to make such decisions either (see the sketch after these comments). "Consequentialism is sometimes criticized because it can be difficult, or even impossible, to know what the result of an action will be ahead of time. Also, in certain situations, consequentialism can lead to decisions that are objectionable, even though the consequences are arguably good." Indeed, no one can know the future with certainty.

Expected good ends rarely justify evil means. And if you choose evil means and the good ends do not occur, you must bear responsibility, and thus the consequences!

PS: "Google released an updated code of conduct, which for the first time in almost two decades removed almost all mentions of its 'Don't be evil' mantra. The code of conduct embodies a company's values, principles, and ideals…"

Without effective regulation, and given the Silicon Valley ethic of "Move Fast and Break Things", future times could be bleak! The point of the image, that there's time to fix things, is fine if and only if there's recognition of the timely need to put existential-risk mitigation regulations in place. If that's ignored, existential risk eventually becomes inevitable, and the only need is to prepare for it. And that, by the way, is the position of "Long-Termism", a radical expectation of many Silicon Valley billionaires! Longtermism is a (pseudo-)ethical stance which gives priority to improving the long-term future. In part this is about accepting the inevitability of political, sociological, economic, and ecosystem collapse. So rather than opposing and fixing these problems, it is better to accept that they will happen, and those with the power and financial capabilities should take steps to survive at all costs, including saying goodbye to the rest of doomed humanity, which must be allowed to suffer and die. Don't Look Up! ⬆️ A super-intelligent AI may predict these Long-termists may be eaten by "Brontorocs"!

Comment: It's likely the US will follow with some differences, to promote US leadership in this area, though US companies operating in the large EU market will need to comply.
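For concreteness, here is a minimal sketch of the "utility function" the first comment refers to. This is the standard textbook expected-utility formalization of consequentialism, assumed here as background rather than taken from the comments themselves: an agent is supposed to pick the action whose outcomes, weighted by their probabilities, maximize utility.

\[
  a^{*} \;=\; \arg\max_{a \in A} \sum_{o \in O} P(o \mid a)\, U(o)
\]

The comment's objection is that, for real-world decisions, neither the outcome probabilities \(P(o \mid a)\) nor a defensible utility \(U(o)\) can actually be known in advance, whether by humans or by an AI trained on past data.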