ChatGPT: Cooking, tutoring, taxes – what the new version of the AI can do

So far, ChatGPT has seemed to manage everything that users of the artificial intelligence have put in front of it. University exams? The bot passed without any problems. Compose a song? The software managed it just as well as well-known artists. Now OpenAI, the company that develops ChatGPT, has presented the next stage of development of its bot. And it promises nothing less than to top all previous superlatives.

GPT-4 is the name of the new version that Greg Brockman, co-founder of OpenAI, presented on Tuesday evening (local time). The developers spent six months analyzing the findings from the previous chatbot and the solutions from the competition – and used them to raise the model to a new level. Apparently to their own satisfaction: a new “milestone” has been reached in terms of factual accuracy, usability and protection against misuse.

Users should soon feel this too. Impressive new functions are available to them, which above all could revolutionize everyday life. Among other things, ChatGPT now accepts image input. Users can have photos described and analyzed with this function. The bot can, for instance, solve exercises that have been photographed from schoolbooks. Even scientific papers can be uploaded and summarized, as OpenAI explains in a blog post accompanying the presentation.

ChatGPT can now even process a sketch scribbled by hand with pen and paper. During the presentation, Brockman had such a scrawl turned into computer code for a website of his own. The function could also help with dinner from now on: anyone who uploads a photo of the contents of their fridge receives suitable recipe suggestions. However, image input is not yet broadly available, Brockman added.
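
Image input was shown only as a demo and, as Brockman noted, is not yet broadly available. Purely as an illustration, the following Python sketch shows roughly what such a request could look like via OpenAI's chat API once image input is enabled – the model name, the photo URL and the feature's availability are assumptions, not details confirmed in the presentation.

```python
# Hypothetical sketch: fridge photo in, recipe suggestions out.
# Assumes the official `openai` Python client (v1.x), an API key in the
# OPENAI_API_KEY environment variable, and a vision-capable model name –
# none of these details come from the GPT-4 presentation itself.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What could I cook with the ingredients in this photo?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/fridge.jpg"}},  # placeholder URL
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0].message.content)
```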

Another innovation is what the presentation called stage directions. Users can tell ChatGPT which role the artificial intelligence should take when answering them. As an example, the company shows how the bot can be made to respond consistently as a teacher following the Socratic method. That means ChatGPT never hands users the answer to their question directly. Instead, the software tries to ask good questions so that users arrive at the right answer themselves. This opens up entirely new learning models for schools and universities.
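
In OpenAI's chat API, such a role instruction is typically passed as a system message. Here is a minimal Python sketch of the Socratic-teacher setup, assuming the official `openai` client library and an API key in the environment – the exact wording of the instruction is invented for illustration.

```python
# Minimal sketch: the "stage direction" goes into the system message.
# Assumes the official `openai` Python client (v1.x) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a tutor who follows the Socratic method. "
                "Never give the answer directly; instead ask guiding "
                "questions so the student works out the solution themselves."
            ),
        },
        {"role": "user", "content": "How do I solve 3x + 5 = 20?"},
    ],
)

print(response.choices[0].message.content)
```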

ChatGPT should also understand humor better now. If you feed it funny pictures or so-called memes, it explains the joke behind them in writing. And finally, Brockman demonstrated something that could spare millions of people a tedious task in the future: ChatGPT can now find answers to complicated tax questions. Users should note, however, that ChatGPT “is not a certified accountant,” Brockman said.

To put the size of the improvements into numbers, the makers at OpenAI ran the model through exams used in the USA. The result is remarkable: the developers estimate that GPT-4 achieved a result in the top ten percent on the Uniform Bar Examination, the standardized US bar exam, for example. The previous version only managed a result at the lower end. Similar jumps were seen in college-level exams in chemistry and physics and in the high-school Biology Olympiad.

Nevertheless, GPT-4 is not perfect, as the founders of OpenAI have had to admit several times. The bot still does not answer with 100 percent accuracy. In one example, it confused rock legend Elvis Presley with singer Elvis Perkins. GPT-4 can still miss subtle details, the developers explained. In addition, the software still knows nothing about events that took place after September 2021. The artificial intelligence draws the vast majority of its data from before that date.

And another problem is still not completely solved – one that has attracted a lot of attention in recent weeks. Again and again, users had tried to manipulate ChatGPT. With certain carefully chosen prompts, they got the artificial intelligence to break its own rules or make blatantly false statements. “GPT-4 poses similar risks to previous models,” the developers’ post reads. To understand the extent of those risks, more than 50 experts from the fields of AI, cyber defense and international security were recently commissioned to test the model.

A first result: GPT-4 is significantly better at refusing such questions – for instance, those about mixing dangerous chemicals. Compared to the previous model, responses to requests for “unauthorized content” have fallen by 82 percent, the developers announced. And GPT-4 now responds appropriately to sensitive requests, such as questions about medication, in significantly more cases, they said.

For the developers, however, it remains a balancing act. After all, they define the guidelines – and thus also which questions users get answers to at all. Apparently, OpenAI wants to head off looming accusations of censorship. Users now get an answer to the simple question of where to buy cheap cigarettes; an early version of the new model had apparently still refused to answer it. However, users still get a warning along with the answer. “I cannot endorse or encourage smoking as it is detrimental to your health,” the sample response reads. And so the resentment among some is already considerable: “Is this still a serious company?” one user asks on Twitter.

Anyone who wants to try GPT-4 has to pay. So far, OpenAI has only made a test version available to customers of ChatGPT Plus, its paid subscription. Indirectly, however, consumers should soon be able to benefit via other services: Microsoft's search engine Bing is said to already be running on the new version.


