
What ChatGPT’s latest updates mean for business users


Improved accessibility and up-to-date information could widen the enterprise appeal of ChatGPT as it nears its first birthday
ChatGPT can now access live information on the internet, answer queries based on image content, and interface with users via speech, after a busy week of updates from OpenAI.
Those on a ChatGPT Plus or Business subscription are now able to draw on live data from the internet, with a link to sources, through an integration with Bing search. 
This enables the chatbot to produce more relevant and informed responses, and ends ChatGPT's previous limitation to training data from 2021.
In addition to its expanded search capabilities, the chatbot is also being given wider access to the multimodal capabilities of GPT-4, the large language model (LLM) that powers its paid tier, to accept image inputs.
The chatbot can also now process user speech using OpenAI's open-source speech recognition model, Whisper.
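For developers curious what sits behind the speech feature, OpenAI also exposes Whisper through its API as a speech-to-text endpoint. The sketch below assembles the multipart/form-data body such a request would carry, using only the standard library; the model name "whisper-1" matches OpenAI's documented API, but the file name and surrounding details are illustrative assumptions, not the article's own material.

```python
import uuid

def build_whisper_request(audio_bytes: bytes, filename: str = "speech.wav") -> tuple[bytes, str]:
    """Build a multipart/form-data body for a Whisper transcription request.

    The resulting body would be POSTed to OpenAI's audio transcription
    endpoint with an Authorization header; no network call is made here.
    """
    boundary = uuid.uuid4().hex
    head = (
        f'--{boundary}\r\n'
        'Content-Disposition: form-data; name="model"\r\n\r\n'
        'whisper-1\r\n'
        f'--{boundary}\r\n'
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        'Content-Type: audio/wav\r\n\r\n'
    )
    # Append the raw audio, then the closing boundary marker.
    body = head.encode("utf-8") + audio_bytes + f"\r\n--{boundary}--\r\n".encode("utf-8")
    content_type = f"multipart/form-data; boundary={boundary}"
    return body, content_type
```

A client would send this body with the returned Content-Type header; the endpoint responds with the transcribed text as JSON.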
Research is a clear use case for the new search features, with workers now able to draw together facts from across the internet via the central dashboard of ChatGPT. 
For example, users could ask ChatGPT to summarize a rival business's financial performance based on its publicly available earnings reports, or to aggregate reviews of a trending product from across a variety of websites.
“Browsing is particularly useful for tasks that require up-to-date information, such as helping you with technical research, trying to choose a bike, or planning a vacation,” OpenAI stated on X (formerly Twitter).
With its new capabilities, ChatGPT can compete more directly with the likes of Bing Chat or Google Bard. Direct comparison with Bing's AI search offering could be of particular interest, as both make heavy use of GPT-4 for generative AI search.
GPT-4 was billed as a multimodal model from its announcement, with the capability to process both text and image inputs. In tests, the model provided real-time guidance to a sight-impaired person via the app Be My Eyes, in place of a human volunteer.
OpenAI stated that its development of the new ChatGPT image input has been shaped by its collaboration with Be My Eyes, including feedback on how to make the service most useful for sight-impaired users.
IT users can now use these capabilities within ChatGPT to turn flowcharts into working code or troubleshoot flaws in a circuit diagram. They could also pass in screenshots of a PDF document to produce a summary of its content, or have the service expand on a handwritten note.
Smaller businesses in particular may benefit from image processing capabilities being embedded directly within ChatGPT, rather than having to call GPT-4 via the OpenAI API and develop their own app with which it can interface.
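For teams that do go the API route, the request a custom app would send pairs a text prompt with an inline image. The sketch below builds such a chat-completion payload using only the standard library, with the image embedded as a base64 data URL; the message structure follows OpenAI's documented vision input format, but the model name and prompt are illustrative assumptions.

```python
import base64
import json

def build_vision_payload(prompt: str, image_bytes: bytes,
                         model: str = "gpt-4-vision-preview") -> str:
    """Assemble a chat-completion request body combining text and an image.

    The image is inlined as a base64 data URL, so no separate upload step
    is needed; the JSON string would be POSTed to the chat completions
    endpoint with an Authorization header.
    """
    data_url = "data:image/png;base64," + base64.b64encode(image_bytes).decode("ascii")
    body = {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": data_url}},
                ],
            }
        ],
    }
    return json.dumps(body)
```

Building and hosting an app around requests like this is exactly the overhead that embedding image input directly in ChatGPT removes for smaller businesses.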