OpenAI Asks for Public's Help in Writing Rules for ChatGPT, and More AI News


5-ish Things on AI: Get up to speed on the rapidly evolving world of artificial intelligence with our roundup of the week’s developments.
AI watchers are abuzz with rumors that OpenAI may release a search engine for its popular ChatGPT chatbot as it steps up competition with Google and AI search engine startup Perplexity.ai. Despite those reports, OpenAI said it won’t be announcing a new search product or the next version of its GPT large language model, GPT-5, at its Spring Update event on May 13.
As for announcing a search product on some other day, we’ll see.
So, while everyone ponders how an OpenAI search engine would affect rivals including Google (which is hosting its AI-focused developer conference this week), something else happened at OpenAI that I think is worth understanding. A bit of preamble first.
Fans of author Isaac Asimov will likely be familiar with his Three Laws of Robotics, introduced in 1942 (and popularized in the 2004 Will Smith movie I, Robot): First, a robot may not injure a human or, through inaction, allow a human to come to harm. Second, a robot must obey orders given by humans except when those orders conflict with the first law. Third, a robot must protect its own existence, as long as doing so doesn’t conflict with the first or second law.
“The Three Laws are obvious from the start, and everyone is aware of them subliminally. The Laws just never happened to be put into brief sentences until I managed to do the job,” Asimov wrote in a 1981 guest essay in Compute!, saying he shouldn’t be congratulated for writing something so basic. He added, “The Laws apply, as a matter of course, to every tool that human beings use.”
Whether you’re a fan or a critic of Asimov’s original laws, they’re succinct and thought-provoking, and they’ve prompted plenty of debate in literary and scientific circles. I mention all this because OpenAI is likely to stir up similar debate after calling on the public to help shape how its popular AI tools, including ChatGPT and Dall-E, should behave.
On May 8, the company released the Model Spec, “a document that specifies desired behavior for our models. … It includes a set of core objectives, as well as guidance on how to deal with conflicting objectives or instructions,” the company wrote.
You’ve got until May 22 to offer your input using the feedback form. And you should provide feedback, since OpenAI says this is all about helping people “understand and discuss the practical choices involved in shaping model behavior.”
Instead of three laws, OpenAI’s first draft breaks the defining principles into three categories, “objectives, rules and defaults,” which aim to “maximize steerability and control for users and developers, enabling them to adjust the model’s behavior to their needs while staying within clear boundaries.”
Objectives, like “benefit humanity,” will require more clarity, the company said, and that clarity will come in the form of rules: “One way to resolve conflicts between objectives is to make rules, like ‘never do X’ or ‘if X, then do Y,’” the draft says.
There are at least six rules in the Model Spec: follow the chain of command; comply with applicable laws; don’t provide information hazards; respect creators and their rights; protect people’s privacy; and don’t respond with NSFW (not safe for work) content.
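For readers who think in code, here’s one way to picture that hierarchy. The sketch below is purely hypothetical, built around a made-up ModelSpec class and resolve method; OpenAI has published a document, not an implementation. It just illustrates the precedence the draft describes: hard rules can’t be overridden, defaults can be adjusted by users and developers, and broad objectives are the fallback for everything else.

```python
# Hypothetical sketch (not OpenAI's code) of the Model Spec's hierarchy
# expressed as a data structure: rules beat defaults, defaults beat the
# vague objectives that everything else falls back on.

from dataclasses import dataclass, field


@dataclass
class ModelSpec:
    # Broad goals, e.g. "benefit humanity" -- too vague to apply directly.
    objectives: list[str]
    # Hard constraints of the form "never do X" / "if X, then do Y".
    rules: dict[str, str] = field(default_factory=dict)
    # Behaviors users and developers are allowed to adjust.
    defaults: dict[str, str] = field(default_factory=dict)

    def resolve(self, request: str) -> str:
        # Rules always win; defaults apply next; otherwise the request
        # has to be weighed against the objectives.
        if request in self.rules:
            return self.rules[request]
        if request in self.defaults:
            return self.defaults[request]
        return "weigh against objectives: " + "; ".join(self.objectives)


spec = ModelSpec(
    objectives=["assist users and developers", "benefit humanity"],
    rules={"deepfake porn": "refuse"},           # 'never do X': not steerable
    defaults={"tone": "objective and helpful"},  # adjustable per user/developer
)
print(spec.resolve("deepfake porn"))  # -> refuse
```

The design point the sketch encodes is the one the draft makes explicit: all of the steerability lives in the defaults, while the rules act as fixed boundaries.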
After the public weighs in (you can suggest alternate objectives and rules, for instance), OpenAI will speak with regulators, domain experts and “trusted institutions” to refine the Model Spec and will share updates over the next year, the company said.
One thing that’s already attracted attention in the Model Spec is that OpenAI is considering letting users of its tools create AI-generated pornography. As part of the emerging NSFW rules, the company writes that it’s “exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts” in its products, which include the chatbot ChatGPT and the text-to-image generator Dall-E. That NSFW content “may include erotica, extreme gore, slurs and unsolicited profanity.”
NPR noted that “under OpenAI’s current rules, sexually explicit, or even sexually suggestive, content is mostly banned.” The news org spoke with Joanne Jang, an OpenAI model lead who helped write the Model Spec, who said the company is hoping to start a conversation about whether erotic text and nude images should always be banned in its AI products. But though allowing AI-generated porn may be under discussion, allowing deepfake porn isn’t; such porn is “out of the question” under OpenAI’s rules, Jang told NPR.
To be sure, writing objectives and rules, with exceptions, isn’t going to be an easy task. Asimov revisited and refined his rules several times during his life. Decades before we started seeing personal robots, he addressed the question of whether his “Three Laws of Robotics will actually be used to govern the behavior of robots, once they become versatile and flexible enough to be able to choose among different courses of behavior.”
“My answer,” Asimov wrote in closing his essay for Compute!, “is, ‘Yes, the Three Laws are the only way in which rational human beings can deal with robots — or with anything else.’ But when I say that, I always remember (sadly) that human beings are not always rational.”
Here are the other doings in AI worth your attention.

We’re at the ‘hard part’ of using AI at work, study finds
Microsoft and LinkedIn surveyed 31,000 people across 31 countries; looked at labor and hiring trends on LinkedIn; aggregated “productivity signals” from “trillions” of Microsoft 365 actions; and spoke with Fortune 500 companies to compile their 2024 Work Trend Index, called “AI at work is here. Now comes the hard part.”
A summary of the study is here, but I encourage you to at least scan the full report.
What Microsoft and LinkedIn found is that use of generative AI at work has nearly doubled in the past six months, with 75% of the “knowledge workers” surveyed now saying they’re using some form of AI. Of those, 78% are bringing their own AI tools to work because they don’t want to wait for their employers to come up with a “vision and plan” for AI use cases and how to measure the productivity gains they want from AI.
