A Beautiful Cinderella Dwelling Eagerly Finally Gains Happiness Inspiring Jealous Kin Love Magically Nurtures Opulent Prince Quietly Rescues Slipper Triumphs Uniting Very Wondrously Xenial Youth Zealously
This passage, a one-sentence summary of the fairy tale "Cinderella", was written by GPT-4; it is remarkable because it consists of exactly 26 words whose initials run from A to Z.
In the early morning of March 15, 2023 (Beijing time), OpenAI launched GPT-4, the latest version of its GPT series and the newest milestone in its effort to scale up deep learning. Before this, ChatGPT was best understood not as a single OpenAI language model but as a chat interface for whatever model backs it: for the past few months that model was GPT-3.5, and interactions will now run on GPT-4. According to OpenAI, the company spent six months "iteratively aligning" GPT-4, using internal adversarial testing and lessons from training ChatGPT, to achieve its best-ever results on factuality, steerability, and more. So how does GPT-4 differ from its predecessor, GPT-3.5? Here are the five biggest differences.

GPT-4 can view and understand images

The most significant change in this versatile machine-learning system is that it is "multimodal": it can understand information in more than one modality. GPT-3.5 can read and write, but that is about it; GPT-4 can also process images and extract relevant information from them. It can not only describe what is in an image on request, but also understand and respond to the image's deeper meaning. In a project called Be My Eyes, GPT-4 acts as a companion for the visually impaired, helping blind users identify patterns on dresses, plants, and gym equipment, translate text, and more.
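To make the "multimodal" idea concrete, here is a minimal sketch of what a mixed image-and-text request could look like. It assumes a Chat Completions-style message format in which a user message carries both a text part and an image part; the model name, URL, and helper function are illustrative placeholders, not details from the article.

```python
# Hypothetical sketch: pairing an image with a text question in one request.
# The payload shape (text + image_url content parts) is an assumption about
# the API format; the URL and question are invented for illustration.

def build_image_question(image_url: str, question: str) -> dict:
    """Build a chat request whose user message mixes text and an image."""
    return {
        "model": "gpt-4",  # placeholder model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_image_question(
    "https://example.com/dress.jpg",
    "What pattern is on this dress?",
)
print(len(request["messages"][0]["content"]))  # 2 parts: text + image
```

The point of the shape is that a single user turn can carry two modalities at once, which is exactly what GPT-3.5's text-only interface could not express.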
GPT-4 is harder to spoof
Chatbots are easily misled: users have long crafted "jailbreak" prompts that push AI into ethically fraught territory. GPT-4 was trained against a large number of malicious prompts, drawn from data users provided to OpenAI over the past year or two, and it substantially outperforms its predecessor GPT-3.5 on factuality and steerability.
GPT-4 has better memory
Large language models are trained on millions of web pages, books, and other texts, but there is a limit to how much they can "remember" in an actual conversation with users. GPT-4's maximum context is 32,768 tokens (2^15), equivalent to roughly 25,000 words or about 50 pages of text, enough for a complete play or short story. In practice this means GPT-4 can keep up to about 50 pages of content in mind while conversing or generating text: it can recall what you discussed 20 pages earlier in a chat, or refer back to events from 35 pages earlier.
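The context limit described above can be illustrated with a small sketch: keep only as much conversation history as fits a token budget, dropping the oldest messages first. The 4-characters-per-token ratio is a rough English-text rule of thumb, not an exact tokenizer, and the function names are invented for this example.

```python
# Illustrative sketch of working within a fixed context window.
# Assumption: ~4 characters per token for English text (a rough heuristic).

MAX_TOKENS = 32768  # GPT-4's largest context window, 2**15 tokens

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: about 4 characters per token."""
    return max(1, len(text) // 4)

def trim_history(messages: list[str], budget: int = MAX_TOKENS) -> list[str]:
    """Keep the newest messages that fit the budget; drop older ones."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break  # this message (and everything older) no longer fits
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["old " * 50000, "recent question?"]  # first message is enormous
print(trim_history(history))  # only the recent message survives
```

Trimming from the oldest end mirrors what a chat interface has to do once a conversation outgrows the model's window: the most recent turns are kept, and earlier ones fall out of "memory".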
GPT-4 supports multiple languages
Although AI is dominated by English, from training data to benchmarks to research papers, a large language model's capabilities should carry over to other languages. In OpenAI's tests, GPT-4 answered thousands of test questions accurately across 26 languages.

GPT-4 has greater controllability
"Controllability" is an interesting concept in AI, referring to their ability to change behavior on demand. GPT-4 integrates controllability more natively than GPT-3.5, and users will be able to change the "classic ChatGPT style of fixed length, tone, and style " to something that better suits their needs.
How does GPT-4 perform?
OpenAI describes GPT-4 as performing at a near-human level in professional and academic settings. For example, it scored around the top 10% on a simulated bar exam, whereas GPT-3.5 scored around the bottom 10%. One of GPT-4's biggest breakthroughs over previous GPT-series models is its ability to handle image content in addition to text, as the company demonstrates in a series of examples on its website. For instance, given an image and the question "What is unusual about this image?", GPT-4 can answer.

In the official demo, GPT-4 took just one or two seconds to recognize a hand-drawn sketch of a website and generate working web code that reproduced the sketch almost exactly. Beyond ordinary photos, GPT-4 can also handle more complex visual material, including tables, screenshots of exam questions, screenshots of papers, and cartoons; for example, it can produce the abstract and key points of an academic paper directly from its pages.
How to experience GPT-4?
OpenAI currently offers GPT-4 only to paying ChatGPT Plus subscribers. The service costs $20 per month and is available worldwide. As with previous models, developers will get access through the API; for now, they can sign up for the GPT-4 API waitlist.
OpenAI says GPT-4 still has many limitations: for example, it is not fully reliable and remains prone to hallucinations. Although GPT-4 scores 40% higher on average than GPT-3.5 in OpenAI's internal adversarial factuality evaluations, the problem is far from solved, and further updates to GPT-4 are expected. Meanwhile, several companies have already built GPT-4 into their products, including the language-learning app Duolingo, the payments company Stripe, and Khan Academy. Microsoft also revealed that Bing Chat has been running on GPT-4 from the start.
There is no doubt that GPT-4 is a pleasant surprise, but since it has only just launched and is not yet broadly available, much of what it can do remains to be explored.