Genuinely concerned about superintelligence replacing humans, says Paytm CEO Vijay Shekhar Sharma

Ten News Network


New Delhi, 11th July 2023: Paytm founder Vijay Shekhar Sharma has expressed concern about the potential disempowerment, if not extinction, of humans as a result of the development of extremely advanced AI systems. He voiced his concerns in a tweet citing a recent blog post by OpenAI.

In his tweet, Sharma picked out some worrying points from the OpenAI post, remarking that he is “genuinely concerned with the power that some people and select countries have already accumulated.”

Sharma also underlined another takeaway from the blog post, writing that “In less than 7 years we have system that may lead to disempowerment of humanity => even human extinction.”

In the blog post, titled “Introducing Superalignment,” OpenAI highlights the need for scientific and technical breakthroughs to steer and control AI systems that could surpass human intelligence. To tackle the problem, OpenAI is dedicating significant computing resources and assembling a team co-led by Ilya Sutskever and Jan Leike.

While the development of superintelligence may still seem far off, OpenAI believes it could arrive within this decade. According to the post, managing the risks it would bring requires new institutions for governance as well as solving the problem of aligning AI systems with human intent.

Today’s alignment techniques rely on human supervision, such as reinforcement learning from human feedback (RLHF). These techniques, however, may not scale to superintelligent systems, because humans will not be able to reliably supervise AI that is far smarter than they are. According to OpenAI, new scientific and technical breakthroughs are needed to overcome this problem.
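To make the term concrete, here is a minimal, purely illustrative sketch of the preference-modelling step that underpins RLHF: a small reward model is trained to score a human-preferred response above a rejected one. The model, shapes, and data below are hypothetical stand-ins, not anything drawn from OpenAI’s systems.

```python
# Toy sketch of RLHF-style preference modelling (illustrative only).
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a response embedding; higher means 'preferred by humans'."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Hypothetical data: embeddings of human-preferred and rejected responses.
preferred = torch.randn(32, 16)
rejected = torch.randn(32, 16)

for _ in range(100):
    # Pairwise (Bradley-Terry style) loss: preferred responses should score higher.
    loss = -torch.nn.functional.logsigmoid(
        model(preferred) - model(rejected)
    ).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The trained reward model is then used to guide a language model with reinforcement learning; the limitation OpenAI points to is that the human preference labels feeding this loop become unreliable once the system being supervised is far more capable than its supervisors.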

OpenAI’s strategy centres on building a roughly human-level automated alignment researcher, then using vast amounts of compute to scale that effort and align superintelligence. Developing a scalable training method, validating the resulting models, and stress-testing the entire alignment pipeline are all part of this approach.
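As a rough illustration of what putting an automated checker in the loop could look like, the hypothetical sketch below has one model draft an answer, a second model audit it against a simple rubric, and any failure escalated to human review. The function names and rubric are assumptions made for illustration, not OpenAI’s actual pipeline.

```python
# Hypothetical sketch of AI-assisted oversight (illustrative only).
from dataclasses import dataclass

@dataclass
class Verdict:
    approved: bool
    reason: str

def draft_answer(prompt: str) -> str:
    # Stand-in for a capable assistant model producing a response.
    return f"Answer to: {prompt}"

def audit_answer(prompt: str, answer: str) -> Verdict:
    # Stand-in for an automated checker that grades the response
    # against a rubric instead of relying on a human for every case.
    if "Answer to:" not in answer:
        return Verdict(False, "response does not address the prompt")
    return Verdict(True, "passes the toy rubric")

def pipeline(prompt: str) -> str:
    answer = draft_answer(prompt)
    verdict = audit_answer(prompt, answer)
    if not verdict.approved:
        # Failures are escalated rather than silently accepted.
        raise RuntimeError(f"escalate to human review: {verdict.reason}")
    return answer

print(pipeline("How should the model handle a risky request?"))
```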
