Max Tegmark says:
If you’re not worried about AI ever becoming so smart that it could take control over us humans, then I would like you to take a moment and just envision your least favourite political leader on this planet, and imagine they come in control of this AI that is so powerful that they are able to dominate all of Earth with it.
Some mainstream AI researchers warn of AI being used by humans to create global digital dictatorships or kill everyone, whilst others warn that misaligned, sentient, or simply goal-directed AI itself will do much the same thing, or worse. But they never talk about UAP.
How long can this go on? The assumption made by many in the AI risk space is that, for the first time in human history, we are creating something more intelligent than us, and that humanity will therefore no longer be the dominant ‘species.’ Or that superpowers will race to gain control of super-strong AI and use it to dominate the whole world; Hugo de Garis, Yuval Noah Harari, Nick Bostrom, Eliezer Yudkowsky, and Tom Davidson have all warned of this.
Tegmark also says:
I think the reason we underestimate existential risk is because brains evolved to deal with problems that we had seen many times before. We have a very hard time feeling afraid of some hypothetical thing in the future…
If we got an email from outer space, from "superior-alien-civilisation.org" saying they’re going to show up in forty years, we wouldn’t just sit here and twiddle our thumbs. We would freak out, and do everything that we could to make sure that this didn’t become a disaster.
But we’ve got much more than an email.
We have had 80-90 years of worldwide anomalous activity: transmedium craft exhibiting advanced capabilities far beyond human technology, ongoing reports over sensitive military and nuclear sites, leaked documents, and insider and whistleblower testimony going all the way back to Roswell. Pilot reports, mass sightings, and reports of a huge range of craft; countless civilian testimonies of contact with a range of non-human humanoid beings; reports of abduction and hybridisation; and ongoing civilian and military/insider reports of concealed research facilities for advanced alien and off-world technology. Legacy programs, defence contractors, FFRDCs, USAPs, claims of Vatican knowledge of an ET reality, and mounting media coverage and public interest in the topic.
With the release of The Age of Disclosure, and increased scientific interest in the phenomenon, Tegmark and kin might have to start eating their own words, because we are not seeing the 'tall ships' (in our case, the alien craft) as they come over the horizon ("problems that our brains are unfamiliar with"). We're acting like the naïve natives of a world facing intervention from advanced forms of non-human intelligence, whilst still operating within a self-limiting framework of hubris, arrogance, and cognitive dissonance: the belief that our own technological inventions represent the pinnacle of innovation and the smartest thing in the universe. Or are they just worried about their careers and credibility?

https://ophello.github.io/naivenatives/
Many AI researchers are happy to consider the possibility that we already have more advanced AIs that are kept secret or classified, so why not consider the same when it comes to hard evidence of non-human technology and biologics?
AI risk is compounded if we consider the possibility that advanced non-human beings are covertly influencing humanity's technological and cultural evolution. In this regard, calls to "make AI safe" may also represent a large part of the risk, because safe AI could be used to control humanity in such a way that it severely limits our freedom, potential, self-determination, and sovereignty. Domesticated house pets don't appear to live under tyranny, but they aren't in control of their destiny, and they don't call the shots; higher intelligences (humans) do. This logic already applies in AI risk discourse, but the risk is most certainly ET as well. We should assume that they either have their own forms of synthetic superintelligence, or have themselves merged with it.
Whether or not governments disclose (they likely won't, because they are already compromised by the influence of the ETs), the age of stigma and ridicule must end, and the age of awareness must begin.
That way, we can begin to think about how humanity survives advanced forms of non-human intelligence and the influence they have on us, whether from the Greater Community (ETs), Earth-based domestic AI systems, or a combination of these forces; S E T H I X.
Xegis, ÆXO13.
Source: The Danger of Artificial Intelligence: Humanity's Last Invention? | ENDEVR Documentary