
Artificial Intelligence A Potential Risk To Society And Humanity, Says Elon Musk

Most CHQ readers are probably aware that earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) Artificial Intelligence (AI) program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.


In response to the rollout of GPT-4, more than 1,000 thought leaders, including Elon Musk, signed a letter sponsored by the non-profit Future of Life Institute calling for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.


The letter detailed potential risks to society and civilization posed by human-competitive AI systems, in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said.


Mr. Musk, whose Tesla cars use Artificial Intelligence for their Autopilot system, has long been vocal about his concerns regarding AI.


Yahoo Finance reported that further concerns came Monday, when Europol, the European Union’s law enforcement agency, joined the chorus of ethical and legal objections to advanced AI like ChatGPT, warning about the potential misuse of such systems in phishing attempts, disinformation and cybercrime.


Back in 2019 we voiced our own concerns about how Google might use Artificial Intelligence, and they weren’t merely about cybercrime or gaining market advantage in the grocery business by integrating AI into shopping apps.


In our article, “Google Teaching Artificial Intelligence To Hate Conservatives,” we observed that fans of the Terminator series of science fiction action movies would recall that the Terminators were the product of an Artificial Intelligence defense system that turned against humans and decided to wipe out its creators.


But what if the fictional Skynet had been programmed so that it only learned to attack white people, or Jews, or Christians, or conservatives? The movie, and the results of that dystopian vision of the future, would have been a lot different.


But not so different from what Jen Gennai, Head of Responsible Innovation at Google Global Affairs, had in mind for Google’s Artificial Intelligence when she told an undercover reporter for Project Veritas:


The reason we launched our A.I. principles is because people were not putting that line in the sand, that they were not saying what's fair and what's equitable so we're like, well we are a big company, we're going to say it…


…my definition of fairness and bias specifically talks about historically marginalized communities. That’s what I care about. Communities who are in power and have traditionally been in power are NOT who I’m solving fairness for. That’s not what I set up for *inaudible* people to address.


Our definition of fairness is one of those things that we thought would be like, obvious, and everyone would agree to. There was, the same people who voted for the current president who do not agree with our definition of fairness.


Ms. Gennai determines policy and ethics for machine learning, or artificial intelligence, at Google. What we learned from the latest Project Veritas report is that AI is increasingly what Google Search is all about.


And Google’s machines are not learning to be nice to conservatives, Christians, white people, or anyone else Google’s “woke” staff decide they don’t like.


You can read more about Ms. Gennai’s ideas about how Artificial Intelligence should work in the UK’s Daily Mail article, “Google's left-wing agenda revealed: Undercover video shows top exec pledging company would 'stop the next Trump situation' and exposes search giant’s secret plan for radical social engineering,” in which she refers to the 2016 election of Donald Trump, saying: 'What happened there and how do we prevent it from happening again?'


Project Veritas also received a trove of confidential documents from within Google. One document is about algorithmic unfairness. It reads, "for example, imagine that a Google image query for CEOs shows predominantly men… even if it were a factually accurate representation of the world, it would be algorithmic unfairness." Gaurav Gite, a Google software engineer, verified the thesis of the document.


The brave Google insider who came forward to Project Veritas explained the Google concept of “fairness.”

Google Insider: “What I found at Google related to fairness was a machine learning algorithm called ML Fairness, ML standing for Machine Learning, and fairness meaning whatever it is they want to define as fair. You could actually think of fairness as unfair because it’s taking as input the clicks that people are making and then figuring out which signals are being generated from those clicks, and which signals it wants to amplify and then also dampen.”
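To make the insider’s description more concrete, here is a minimal, purely hypothetical sketch of that kind of signal re-weighting. None of this is Google’s actual code; the category labels, weights, and scoring function are all invented for illustration.

```python
# Hypothetical illustration only -- not any real system's code.
# Sketch of re-ranking that amplifies or dampens click-derived
# relevance signals according to an invented per-category weight.

# Invented multipliers: >1.0 amplifies a signal, <1.0 dampens it.
CATEGORY_WEIGHTS = {
    "news_talk": 1.0,
    "right_winger": 0.4,      # dampened
    "preferred_source": 1.5,  # amplified
}

def rerank(results):
    """Order results by click-based relevance scaled by category weight."""
    def adjusted_score(result):
        weight = CATEGORY_WEIGHTS.get(result["category"], 1.0)
        return result["click_relevance"] * weight
    return sorted(results, key=adjusted_score, reverse=True)

results = [
    {"url": "a.example", "category": "right_winger", "click_relevance": 0.9},
    {"url": "b.example", "category": "news_talk", "click_relevance": 0.6},
]
print([r["url"] for r in rerank(results)])
# ['b.example', 'a.example'] -- the higher raw click signal loses
# because its category weight dampened it.
```

The point of the toy example is simply that a single table of weights, chosen by whoever maintains it, can silently reorder what users see without any change to the underlying click data.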


Google Insider: “So what they want to do is they want to act as gatekeepers between the user and the content that they’re trying to access. So they’re going to come in and they’re going to filter the content, and they’re going to say, ‘Actually we don’t want to give the user to that information because it’s going to create an outcome that’s undesirable to us’.”


In response to questions posed by Project Veritas founder James O’Keefe, the Google insider explained how the search giant, which also owns the video platform YouTube, applies the same kind of Leftwing “social justice” standards to censor YouTube videos posted by conservative content creators.


Google Insider: So the way that Google is able to target people is that they take videos, and then they do a transliteration through using artificial intelligence. And they look at the translated text of what those people are saying and then they assign certain categories to them like right winger, or news talk, and then they’re able to take those, and apply their algorithmic re-biasing unfairness algorithms to them so that their content is suppressed across the platform.
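Purely as an illustration of the pipeline the insider describes, transcribe a video with AI, classify the transcript, then suppress flagged categories, a hypothetical outline might look like the following. Every function, label, and threshold here is invented; none of it comes from any real Google or YouTube system.

```python
# Hypothetical outline only -- all names and labels are invented.

SUPPRESSED_CATEGORIES = {"right_winger"}  # invented label set

def transcribe(video):
    """Stand-in for an AI speech-to-text step."""
    return video["transcript"]  # assume a transcript is already attached

def classify(text):
    """Stand-in for a text classifier that assigns a category label."""
    return "right_winger" if "election" in text.lower() else "news_talk"

def visibility_multiplier(video):
    """Downweight distribution of videos whose transcript is flagged."""
    category = classify(transcribe(video))
    return 0.1 if category in SUPPRESSED_CATEGORIES else 1.0

video = {"transcript": "Talking about the election results tonight..."}
print(visibility_multiplier(video))  # 0.1 -- suppressed across the platform
```

The structure matters more than the details: once content is machine-labeled, a one-line rule is enough to suppress an entire category.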


Google Insider: So they’re playing narrative control. And what they’re doing is they’re applying their human, the human component, which is they’re going through – with an army of people – and they are manually intervening, and removing your content from, from their servers, and they are saying that the algorithms did it. And in that case for the high profile people, it’s not just ML Fairness that you guys have to worry about, it’s actual people that have their heads filled with this SJW mindset, they’re going through and removing the content because it – because they don’t agree with it.


As he wrapped up the first part of the interview, James O’Keefe asked the Google whistleblower if he was afraid, and his answer is one we should all be brave and principled enough to follow.

Google Insider: I am afraid. I was more afraid. But, I, I had a lot of difficulty with the concept of, you know, my life ending because of this, but I, I imagine what the other world would look like and it’s not a place I’d want to live in. Hopefully, I get away with it, and nothing bad happens, but bad things can happen. I mean, this is a behemoth, this is a Goliath, I am but a David trying to say that the emperor has no clothes. And, um, being a small little ant I can be crushed, and I am aware of that. But, this is something that is bigger than me, this is something that needs to be said to the American public.


The letter signed by Elon Musk, Steve Wozniak, co-founder of Apple, and over 1,000 other thought leaders in computer science, AI and governance called for a pause in AI development that is public and verifiable and includes all key actors; if such a pause cannot be enacted quickly, the letter says, governments should step in and institute a moratorium.


AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.


AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.


We agree with Mr. Musk and his colleagues, and have signed the letter with the caveat that the only thing we can think of that would be worse than woke corporations being in charge of AI is for woke governments to be in charge of it.



