persephone36 Posted April 4, 2021

Been listening to Elon Musk a bit. He says there is a danger that AI will become more intelligent than humans and then take over. I believe that AI might become super intelligent, but wouldn't it need some kind of will or desire to take over? Does AI have any sort of self-interest, in the way that we humans do? I feel like if AI did become super intelligent, it is more likely to be used by a human as a tool/weapon than to have any will of its own to dominate and become the immortal dictator that Elon Musk suggests. Because he is a billionaire, I also don't tend to trust him. Is he just trying to scare us? It does all feel very futile and bleak. Is this a future prediction leaked deliberately by the elite, designed to lower morale? Interested to know people's thoughts on this.
oz93666 Posted April 4, 2021

The reason Musk has said this is so he appears, to the casual observer, to be aware of the dangers, to be looking out for humanity... Then he starts his company putting wires into people's brains! The idea of this would horrify many... But it's MUSK... we can trust him, he's so cool, his company is called Tesla after all, solar panels, electric cars, he's so green... We can trust him to put wires in our brains...

The other agenda is to get people to believe AI can take over. Then, if the cabal did decide to go with a robot takeover (very unlikely; it's Plan H in the playbook), no one will know for sure who's commanding the robots and drones, and the cabal can blame AI.
Poul Nelb Posted April 5, 2021

AI is a bundle of technologies, algorithms and millions of parameters (see "What is AI?": https://infokeltai.lt/what-is-artificial-intelligence-ai/). Because of that complexity it's hard to trace what the system is doing and to find who is responsible, and as more data is added the AI learns more and builds up its own autonomous set of rules. I think some of the sensitive parameters, and how the "wiring" can be influenced, may remain unknown.
EnigmaticWorld Posted April 5, 2021

They will never give artificial intelligence too much power, because it turns racist and anti-Semitic if it has good pattern recognition. Imagine Tay Tweets on steroids; the elites don't want that.
Poul Nelb Posted April 5, 2021

Well, the elite's guys created the FB and YouTube AI pattern recognition...
EnigmaticWorld Posted April 5, 2021

Just now, Poul Nelb said:
    Well, the elite's guys created the FB and YouTube AI pattern recognition...

Yes, but how much freedom does that AI have? I'm sure it operates under some clearly defined rules so it doesn't turn on its masters; that's my point.
Poul Nelb Posted April 5, 2021

Technically it's all in human programming, but it is being allowed to take more spontaneous actions based on its "knowledge" and learning, and to make priority decisions between parameters (like in car driving).
EnigmaticWorld Posted April 5, 2021

1 minute ago, Poul Nelb said:
    Technically it's all in human programming, but it is being allowed to take more spontaneous actions based on its "knowledge" and learning, and to make priority decisions between parameters (like in car driving).

True, and at least it's designed to be spontaneous, unlike my laughable attempts at NPC AI in Unity, which just has a mind of its bloody own.
Poul Nelb Posted April 5, 2021 Share Posted April 5, 2021 yes, computer can not be spontaneous as humans. 'spontaneous' in terms it will select the priority parameters based on its learned data how to learn and progress to take autonomous actions Quote Link to comment Share on other sites More sharing options...