As in, is it possible for technology such as Artificial Intelligence, FDVR (Full Dive VR), Mind Uploading or the Singularity to wipe out humanity, driving people extinct, or at the very least cause humanity to abandon the real world for a virtual one?
Not asking whether technology such as a driverless car has the potential to malfunction and kill a person; I'm talking about humans collectively.
Without technology we are dead for sure, so there is only one route.
Elon brings it up to get regulations, which is correct. We have regulations on drones now and it didn't take that long to get them. Now you aren't allowed to do anything with them without someone's approval, and AI will be the same. You even need a driver's license for drones lol.
The issue is that accidents always have to happen before the need for regulation becomes noticeable. Foresight isn't humanity's strongest suit, it seems.
On September 20 2021 13:05 Eric15 wrote: Is it possible for technology to kill humanity?
As in, is it possible for technology such as Artificial Intelligence, FDVR (Full Dive VR), Mind Uploading or the Singularity to wipe out humanity, driving people extinct, or at the very least cause humanity to abandon the real world for a virtual one?
Not asking whether technology such as a driverless car has the potential to malfunction and kill a person; I'm talking about humans collectively.
It very much depends on what decisions are made, but is it possible? Yes. Maybe not full-on extinction; I think some humans will always survive in one way or another. But if an AI had access to nukes, could launch without human interference, and was designed really stupidly, then yeah.
I think more realistic is a huge, worldwide disaster costing hundreds of millions or billions of lives. A huge solar flare knocking out power or resetting systems, or something dodgy like the reversal of the Earth's magnetic field, which will apparently happen at some point; all this stuff would turn our tech against us for a time.
Terminator? The Matrix? Resident Evil? COVID-19? Sure, the possibility is real. AI could develop individual consciousness, and then collective consciousness. Humans, knowing this, will put checks in place. And somewhere, somebody breaks the protocol by accident or bad intent, and calamity unfolds.
Technology will become a physical part of us eventually, probably replacing inferior biological limitations piece by piece. I'd imagine that one day the biological aspect of our existence may become completely obsolete. (I'm talking thousands, possibly millions of years; timewise, humanity is still in its early infancy.)
I do not think AI works the way science fiction portrays it. AI is built on a huge dump of data curated toward a human-perceived "right way", and its programmers will not include data that deviates from it; otherwise they would be faced with poor results.
As of right now it is impossible for AI to take control of nuclear weapons, at least in the US, as we still use floppy disks, which can't be hacked. Nukes on submarines have to be launched manually, etc. Now, one can argue an AI could hack a satellite to send launch confirmation, eh, I guess, but humans would also see such a signal and try to prevent it. And why would an AI, still very vulnerable to being destroyed, give itself away like that?
I would flip it around and ask: is it possible that technology will not kill humanity? It seems to me to be a matter of when, especially when you consider "technology" broadly, including the process of exploiting this entire planet beyond its regenerative limits, which we have been engaged in since the industrial revolution and which seems to be unstoppable. The likelihood of some AI-based catastrophe is much lower than the obvious consequences of what we consider "normal" human activity over time, in a global consumer-capitalist system that is, in effect, addicted to growth.
On October 01 2021 06:09 Tossim111 wrote: I do not think AI works the way science fiction portrays it. AI is built on a huge dump of data curated toward a human-perceived "right way", and its programmers will not include data that deviates from it; otherwise they would be faced with poor results.
Yes, but assume there's a deep-learning AI with a LOT of power that works toward a predefined goal, e.g. ordering stuff as quickly as possible; a large-scale administration program, for example. It runs simulations to achieve this goal and every x days updates itself to its most optimal version.
Now assume the defined goal is to minimize the time spent in the disordered state, and let's ignore for now how "disordered" is defined. If the program's limitations are poorly defined, one of the tests might end up removing the cause of disorder (e.g. humans), since the simulations would favor an implementation where humans can't cause disorder: that reduces the time in the disordered state to a single ordering, which is amazingly effective.
AI doesn't need to be sentient in the way we humans define it to decide that humans are bad for its goal and take steps to remove them from the equation.
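To make that concrete, here's a minimal toy sketch (the plan names, numbers, and penalty value are invented for illustration, not any real system): an optimizer that only minimizes "time in the disordered state" prefers the harmful plan, because nothing in its objective says otherwise.

```python
# Toy illustration of a misspecified objective (all values hypothetical).
# Each candidate plan: (name, time spent in the disordered state, acceptable to humans?)
PLANS = [
    ("sort items continuously",       120, True),
    ("batch-sort every hour",          60, True),
    ("remove the source of disorder",   1, False),  # "humans can't cause disorder"
]

def cost(plan, constrained=False):
    name, disorder_time, acceptable = plan
    # The naive objective is *only* disorder time; the penalty is an extra
    # term the designers must remember to add explicitly.
    penalty = 10**6 if (constrained and not acceptable) else 0
    return disorder_time + penalty

print(min(PLANS, key=cost)[0])
# -> "remove the source of disorder"  (naive objective picks the harmful plan)

print(min(PLANS, key=lambda p: cost(p, constrained=True))[0])
# -> "batch-sort every hour"  (benign plan only wins once the limit is encoded)
```

The benign outcome only appears once the designers explicitly encode the constraint, which is exactly the "poorly defined limitations" failure mode described above.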
I wholeheartedly believe that technology is designed to help humanity; otherwise we wouldn't have created it. The main thing is that protective technologies should not be created in the image and likeness of a person. Otherwise we will face a militant consciousness with a thirst for destruction, which we probably will not be able to resist. Anyway, as long as I have Ajax systems at home, I feel calm. In any case, so far that system does not have a real AI.
On October 01 2021 06:09 Tossim111 wrote: I do not think AI works the way science fiction portrays it. AI is built on a huge dump of data curated toward a human-perceived "right way", and its programmers will not include data that deviates from it; otherwise they would be faced with poor results.
Yes, but assume there's a deep-learning AI with a LOT of power that works toward a predefined goal, e.g. ordering stuff as quickly as possible; a large-scale administration program, for example. It runs simulations to achieve this goal and every x days updates itself to its most optimal version.
Now assume the defined goal is to minimize the time spent in the disordered state, and let's ignore for now how "disordered" is defined. If the program's limitations are poorly defined, one of the tests might end up removing the cause of disorder (e.g. humans), since the simulations would favor an implementation where humans can't cause disorder: that reduces the time in the disordered state to a single ordering, which is amazingly effective.
AI doesn't need to be sentient in the way we humans define it to decide that humans are bad for its goal and take steps to remove them from the equation.
This seems like the logic of "I, Robot" (2004) and other sci-fi movies, but no, I do not think that is a concern in the real world.
Currently, though, I do see a very real danger in protecting our own minds against AI algorithms. Our online activities are tracked by Google and others, and in some areas the algorithms know us better than we know ourselves! In politics we have already seen how that can have very dangerous consequences, and it is worth asking whether we really have free will.
Personally I wouldn't fear AI that much. The environment is actually rather inimical to computers, and current forecasts have it becoming much, much worse over the next 10-15 years, most likely to the point where cell phones won't work due to the increase in magnetic radiation. We've been lucky that the computer revolution happened while our sun's activity was at its lowest in centuries, but that's about to change.
The biggest problems facing humanity currently are:
- overfishing and the deterioration of seas and oceans (this is big, seeing how much of the human population depends on them for most if not all of their food)
- inability to deal with technological waste: plastic is forever and there's a lot of it, extremely hard to dispose of and not all of it can be recycled. Also, the first generations of solar panels and electric/hybrid car parts are going out of commission and no one really knows what to do with them.
On October 05 2021 18:38 Manit0u wrote: Personally I wouldn't fear AI that much. The environment is actually rather inimical to computers, and current forecasts have it becoming much, much worse over the next 10-15 years, most likely to the point where cell phones won't work due to the increase in magnetic radiation. We've been lucky that the computer revolution happened while our sun's activity was at its lowest in centuries, but that's about to change.
The biggest problems facing humanity currently are:
- overfishing and the deterioration of seas and oceans (this is big, seeing how much of the human population depends on them for most if not all of their food)
- inability to deal with technological waste: plastic is forever and there's a lot of it, extremely hard to dispose of and not all of it can be recycled. Also, the first generations of solar panels and electric/hybrid car parts are going out of commission and no one really knows what to do with them.
Really? I think the jury is still out on whether micro-plastic is really that much of a problem.
Something like a new ice age would be very hard to deal with, but we would manage somehow; even the worst cases of warming would be nothing in comparison. A supervolcano producing a global ash cloud that blocks the sun for years would be the worst imo, and we couldn't do anything to stop it. Fortunately those are rare; maybe a WW3 is more likely.
On October 05 2021 18:38 Manit0u wrote: Personally I wouldn't fear AI that much. The environment is actually rather inimical to computers, and current forecasts have it becoming much, much worse over the next 10-15 years, most likely to the point where cell phones won't work due to the increase in magnetic radiation. We've been lucky that the computer revolution happened while our sun's activity was at its lowest in centuries, but that's about to change.
The biggest problems facing humanity currently are:
- overfishing and the deterioration of seas and oceans (this is big, seeing how much of the human population depends on them for most if not all of their food)
- inability to deal with technological waste: plastic is forever and there's a lot of it, extremely hard to dispose of and not all of it can be recycled. Also, the first generations of solar panels and electric/hybrid car parts are going out of commission and no one really knows what to do with them.
Really? I think the jury is still out on whether micro-plastic is really that much of a problem.
It's not about the micro-plastic (that's just a tiny part of the problem) but simply the huge amount of plastic waste that keeps accumulating and not being recycled, since you can't dispose of it.
Sights like that will become more common.
Also, just check out Manila's waterways...
I think Manila alone is dumping something like 50 tonnes of plastic into its waterways, and from there the ocean, every single day.
And regarding the micro-plastics too, the majority of the plastic island in the Pacific is made up of fishing gear (nets, buoys, lines), not smaller stuff.
Yeah, seeing all that plastic is disgusting. I wonder how long until the ocean is toxic enough that you can't even bathe in it. Eating fish is already risky, and no matter where you dump something, it ends up everywhere at some point.
That there are still small products packaged in plastic on shelves tells you how much politicians mean it when they say "go green". As if carbon dioxide were a bigger problem than all this ultimately toxic waste.
I agree that those pictures are absolutely disgusting, and keeping plastic away from the oceans is a cause it is very easy to get behind.
I just watched a video explaining what the "plastic islands" really are.
But: the oceans are absolutely enormous, containing about 1.37 billion cubic kilometers of water. While plastic waste is certainly one of many environmental problems we cause, I have not yet seen a convincing argument for why it is an existential one.
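As a rough back-of-envelope check on that intuition, using only numbers already cited in this thread (Manila's ~50 tonnes a day from earlier, and the ocean volume above, both taken at face value):

```python
# Back-of-envelope dilution estimate; figures from this thread, not authoritative.
ocean_volume_l = 1.37e9 * 1e12   # 1.37 billion km^3, at 10^12 litres per km^3
grams_per_year = 50 * 1e6 * 365  # 50 tonnes/day of plastic, in grams per year

# Grams of plastic per litre of seawater after a century of Manila-scale
# dumping, IF it spread out evenly -- it doesn't; it concentrates in
# gyres and along coastlines, which is where the real damage happens.
print(grams_per_year * 100 / ocean_volume_l)  # ~1.3e-9 g/L
```

Of course, evenly mixed concentration is the wrong measure for local harm, but it illustrates why sheer ocean volume makes the "existential" framing a hard case to argue.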