What just happened? It is understandable why many people are concerned that artificial intelligence could become a threat to humanity; Hollywood has released plenty of films about rogue AI over the years. But when Sam Altman, CEO of ChatGPT creator OpenAI, warns that the world may not be far from a "potentially scary" AI, maybe it's time to listen.
Altman, who co-founded OpenAI and previously served as president of Y Combinator, posted several tweets about generative AI over the weekend. He wrote that the benefits of integrating AI tools into society mean the world is likely to adapt to the technology quickly, and he believes these tools will help people become more productive, healthy, intelligent, and interesting.
Altman says such a transition is "mostly good" and could happen quickly, comparing it to how the world moved from the pre-smartphone era to the post-smartphone era. But he cautioned against making the shift "superfast," a prospect he finds frightening because society needs time to adapt.
There was also a warning about the need to regulate the industry. "We also need enough time for our institutions to figure out what to do. Regulation will be critical and will take time to figure out; although current-generation AI tools aren't very scary, I think we are potentially not that far away from potentially scary ones," Altman tweeted on February 19.
The tweets follow well-publicized problems with generative AI, such as Microsoft's GPT-based Bing Chat calling users liars and behaving in an overly aggressive or rude manner. Microsoft responded by limiting users to 50 chat turns (an exchange consisting of a user's question and the chatbot's answer) per day and five chat turns per session. Altman also wants to make sure that chatbots don't produce biased results.
These disturbing conversations occur because AI systems are limited by what they are trained to do and cannot "think" for themselves. That limitation is what recently allowed an amateur Go player to beat a top Go-playing AI using a tactic that humans could easily spot.
Generative AI is not the only type of artificial intelligence whose regulation is becoming a priority. The use of AI in warfare is now in the spotlight, and more than 60 countries have agreed to place the responsible use of artificial intelligence higher on the political agenda.