
AI is on the verge of mastering the creative arts


Experts believe AI models will soon rival humans in every creative discipline.
Is there a formula for quality content? The writer in me wants to scoff at the question, but another part is ready to admit there may be such a thing as a mathematically immaculate sentence.

Software driven by artificial intelligence (AI) is already being used by some companies to craft simple pieces of content (website copy, product descriptions, social media posts, etc.), saving them the hassle of writing it themselves. But how far does this concept extend?

It’s easy to understand how a machine might be taught to follow the strict rules of grammar and construct snippets of text based on information provided. The idea that AI might be able to pluck out the most effective word for a specific situation, based on an understanding of the audience, is also within the bounds of our imagination. It is harder, though, to imagine how AI models could be taught the nuances of more complex writing styles and formats. Is a lengthy metafictional novel with a deep pool of characters and a satirical bent a stretch too far, too human?

Synthetic media only became possible in the first place, however, thanks to the availability of immense computing resources and forward strides in the field of AI. Neither area is showing any signs of a plateau, quite the opposite, so it follows that content automation will only grow more sophisticated too.

As with any AI product, language models learn to function as desired by first absorbing large quantities of data. By scrutinizing a mass of existing content, a model learns the rules of grammar, syntax and proper word selection. Until very recently, however, AI models were unable to meet the high standards set by human writers, particularly where long-form content is concerned. Mistakes and eccentricities betrayed the non-human author every time.

“One of the historical problems with processing very long passages of text is that language models struggle to remember how different parts of the text relate to each other, partly due to something called the ‘vanishing (and exploding) gradient problem’,” explained Jon Howells, Lead Data Scientist at technology services firm Capgemini. “However, AI researchers have been building bigger language models with better techniques, using huge amounts of data and vastly more computational power.”

The leading light in this field is a company called OpenAI, the creator and custodian of a technology known as GPT (short for Generative Pre-trained Transformer), now in its third generation.

In 2018, the company unveiled the first iteration of GPT, which was able to perform natural language processing (NLP) tasks, such as answering questions and analyzing sentiment, thanks to a unique new training method. OpenAI paired unsupervised pre-training, whereby large unlabeled datasets are fed into the model, with supervised learning, a fine-tuning process that uses smaller datasets geared towards solving specific tasks.

GPT-3, the latest and most powerful version, expands upon this premise significantly. Built on a record-breaking 175 billion parameters (the number of values the model attempts to optimize), GPT-3 is capable of generating convincing human-like text on demand, using only a handful of prompts. Since it was released in private beta last year, developers have used the model to generate scripts, songs, press releases, essays and more. With just a few tweaks, one tester found GPT-3 could even be taught to generate HTML code.
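To make that concrete, here is a minimal sketch of how a developer might prompt GPT-3 for a piece of marketing copy through OpenAI's API, assuming the `openai` Python client from the private beta; the engine name, prompt and parameter values are illustrative choices rather than anything the article specifies.

```python
# Minimal sketch: asking GPT-3 for a product description from a short prompt.
# Assumes the `openai` Python client and a beta API key; the engine name and
# parameter values are illustrative, not prescriptive.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

prompt = (
    "Write a short product description.\n"
    "Product: solar-powered garden lamp\n"
    "Audience: eco-conscious homeowners\n"
    "Description:"
)

response = openai.Completion.create(
    engine="davinci",   # GPT-3's largest named engine at launch
    prompt=prompt,
    max_tokens=80,      # cap the length of the generated copy
    temperature=0.7,    # allow some creativity without free association
)

print(response.choices[0].text.strip())
```

The same pattern, a few lines of context followed by a cue for the model to complete, is what testers used to coax scripts, songs and even HTML out of the model.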
Blown away by the performance of the language model, Microsoft has since secured an exclusive licensing agreement with OpenAI, which will see the firm integrate GPT-3 into its own products and services.

To give you a sense of what it’s capable of, here’s what GPT-3 produced when fed the following prompt: “Here’s a short poem by Dr. Seuss about, elon musk, tesla’s share price, and the SEC.”

“Elon Musk tweets a chart,
A tweet so bright
It’s short-selling experts
Their fears ignite,
Making numbers gyrate
That only hours before
Were not.
Great Scott!
What was this tweet?
And where did this chart come from?
Well, Tesla’s shares shot up
A thousand points,
As shorts scrambled to cover,
And they lost their shirts!”

(The poem continues.)

Although OpenAI has offered no indication of when GPT-4 might arrive, the second and third generations both landed within a year of their predecessors, suggesting we might not have all that long to wait. In terms of scale, GPT-3, at 175 billion parameters, was roughly two orders of magnitude larger than GPT-2’s 1.5 billion. If the same hundredfold increase is feasible again, GPT-4 could be built on an incredible 17.5 trillion parameters. With greater scale will come even greater performance.

OpenAI has made its technology commercially available via an API, and rival products (e.g. Google’s BERT) are open source, which means businesses and entrepreneurs can use the models as a foundation for their own AI content services. Jasmine Wang, a researcher who worked on GPT-2 at OpenAI, is one such entrepreneur. Her latest venture, Copysmith, gives clients the tools to generate marketing and ad copy using just four pieces of information: company name and description, target audience and keywords.
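The article doesn't say how Copysmith works under the hood, but a service built on top of a general-purpose language model typically assembles structured inputs like these into a prompt before asking the model to complete it. The sketch below is a hypothetical illustration of that pattern; the function and variable names are invented for this example, and the commented-out `generate` call stands in for whichever completion API such a service might use.

```python
# Hypothetical sketch: turning four structured inputs (company name and
# description, target audience, keywords) into a prompt for a language model.
def build_ad_copy_prompt(company, description, audience, keywords):
    return (
        f"Company: {company}\n"
        f"What it does: {description}\n"
        f"Target audience: {audience}\n"
        f"Keywords: {', '.join(keywords)}\n"
        "Write a short, punchy piece of ad copy:"
    )

prompt = build_ad_copy_prompt(
    company="Acme Lighting",
    description="Solar-powered garden lamps with a ten-year warranty",
    audience="eco-conscious homeowners",
    keywords=["solar", "garden", "low maintenance"],
)

# ad_copy = generate(prompt)  # stand-in for whatever model API the service calls
print(prompt)
```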
