
The day before the release of Baidu's "Wenxin Yiyin", OpenAI launched GPT-4, which may be a major blow to Baidu and Google.
People have already experienced ChatGPT running on GPT-3.5, but GPT-4 is more powerful than its predecessor: it is more reliable and accurate, can read images, and can even role-play. GPT-4 has already been deployed in ChatGPT and Bing, once again reshaping the outside world's perception of what AI can do.
The brighter GPT-4 shines, the more anxious competitors like Google and Baidu become. After all, while other companies were still busy benchmarking GPT-3.5, OpenAI swiftly upgraded its large model to GPT-4, pulling far ahead like a lone master in search of a worthy challenger.
GPT-4 defeats GPT-3.5
Wakatabe and some of his old friends didn't sleep well: OpenAI released GPT-4 in the early hours of March 15, and WeChat message alerts kept buzzing all night.
The release of GPT-4 in the early morning of March 15, Beijing time, was widely anticipated, an inevitable consequence of the runaway popularity of ChatGPT built on GPT-3.5. Watchers wanted to know just how much better it would be than GPT-3.5; comparing it with other, similar products held little interest, because nothing able to stand on the same starting line had appeared yet.
OpenAI knew what the onlookers craved. At the launch event, which felt more like a product demo, its developers had GPT-3.5 and GPT-4 tackle the same questions, and those who stayed up late were not disappointed.
First, the OpenAI developers pasted in a blog post and asked GPT-3.5 to summarize it in a sentence in which every word begins with "G". GPT-3.5 simply gave up. When it was GPT-4's turn, it answered quickly and met the requirement perfectly. The developers then asked it to do the same starting every word with "A", and GPT-4 pulled it off again.
As if to heighten the show, the developers turned directly to the Discord community and took up users' suggestion of the letter "Q". Once again, GPT-4 handled it with ease.
For this round of demonstrations, OpenAI deliberately chose a problem beyond GPT-3.5's "threshold" to test how far GPT-4's capabilities had evolved. As officially explained, the difference between GPT-3.5 and GPT-4 can be subtle in casual conversation and only emerges once a task is sufficiently complex: GPT-4 is more reliable, more creative, and able to handle much more nuanced instructions than GPT-3.5.
To assess the extent of GPT-4's improvement more fully and intuitively, OpenAI had it sit several mock exams alongside GPT-3.5, including the Uniform Bar Exam, graduate school entrance exams, medical knowledge self-assessments, art history, and calculus. The results showed GPT-4 all but crushing GPT-3.5: on the simulated bar exam, GPT-4 scored in the top 10% of test takers, while GPT-3.5 scored in the bottom 10%.

Just as Apple tucks an Easter egg into every launch event, GPT-4 also arrived with a leap-forward feature: accepting visual input. In other words, GPT-4 can now read pictures.
The official explanation: given an input that interleaves text and images, GPT-4 can generate text output (natural language, code, and so on). Put simply, hand it an image along with a request, and it will return the desired result.
In the demo, a developer drew a rough sketch of a website and asked GPT-4 to turn it into a working, colorful page using short HTML/JS. Within seconds, GPT-4 delivered a complete web page.
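Image input was not publicly available at the time, but for readers curious what such a request looks like programmatically, here is a minimal sketch using the official OpenAI Python SDK, assuming access to a vision-capable GPT-4-family model; the model name, file name, and prompt below are illustrative stand-ins, not the ones used in OpenAI's demo.

```python
import base64

from openai import OpenAI  # official `openai` Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical sketch image standing in for the hand-drawn mockup from the demo.
with open("website_sketch.png", "rb") as f:
    sketch_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable GPT-4-family model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Turn this hand-drawn sketch into a working web page "
                     "using short, self-contained HTML and JavaScript."},
            # Images are passed inline as a base64 data URL.
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{sketch_b64}"}},
        ],
    }],
)

print(response.choices[0].message.content)  # the generated HTML/JS
```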
Not only that: GPT-4 could also make sense of popular internet memes. Shown one such meme image, GPT-4 not only got the joke but explained it in complete earnest.
The only way to know whether GPT-4's image-reading ability is as powerful as OpenAI claims is to try it firsthand. Unfortunately, visual input is not yet generally available and is being tested with only a small group of developers, a move OpenAI CEO Sam Altman explained as guarding against possible safety and ethical issues.
GPT-4 also has a knack that GPT-3.5 lacked: instead of speaking in one fixed tone and style, it can take on different roles and voices. Thanks to this, users can have GPT-4 role-play and customize its persona.
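This steerability is exposed through the chat API's "system" message, which pins down the persona before the conversation starts. Below is a minimal sketch using the OpenAI Python SDK; the persona and question are made up for illustration.

```python
from openai import OpenAI  # official `openai` Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system message fixes the persona up front; per OpenAI, GPT-4 follows
# such instructions far more faithfully than GPT-3.5, which tends to drift
# back to its default assistant voice.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a grumpy pirate captain. Stay in character and "
                    "answer every question in pirate speech."},
        {"role": "user", "content": "How do I boil an egg?"},
    ],
)

print(response.choices[0].message.content)
```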
Like each new generation of iPhone that has wowed people, GPT-4 demonstrates far more power than its predecessor. It is not perfect, though: like GPT-3.5, GPT-4 still sometimes makes things up, and its "solemn nonsense" cannot be entirely avoided. OpenAI says GPT-4 scores 40% higher than GPT-3.5 on its internal adversarial factuality evaluations, which still leaves plenty of room for improvement.
Google and Baidu are more anxious
OpenAI is walking the same road Apple once did: becoming the leader and pulling hard away while everyone else struggles to catch up.
Just before GPT-4's release, Internet giant Google posted a trailer on YouTube announcing the integration of AI into office applications such as Gmail and Google Docs. The video went to great lengths to showcase the features: brainstorming, proofreading, writing, and rewriting in Docs; bringing creative ideas to life with auto-generated images, audio, and video in Slides; and more.
However, "there was no splash at all, and when GPT-4 opened for launch a few hours later, the people all ran out at once." This is how Wakatabe described his observation. Looking at the direction of public opinion on social networks, Google's new AI moves were overwhelmed by the overwhelming news of GPT-4 - whether on Twitter overseas or on Weibo at home, GPT-4 was on the hot search list.
Not long ago, Meta announced its new large language model, LLaMA, which it says will help researchers reduce the "bias, toxic comments, and potential for misinformation" that generative AI tools can bring. Meta also claims the model can match mainstream large models such as OpenAI's GPT-3 and Google's PaLM with only about one-tenth as many parameters. After GPT-4 arrived, that news, too, disappeared from public discussion.
The more OpenAI shines, the more anxious the other tech giants become.
In February, Google, rushing to answer ChatGPT, embarrassed itself when its chatbot Bard flubbed a question at its own debut, wiping roughly $100 billion off the company's market value in a single day.
In China, Baidu is likewise racing to build a ChatGPT-style chatbot, "Wenxin Yiyin". According to the announcement, Baidu will hold a launch event for Wenxin Yiyin this afternoon. While many were still wondering whether Wenxin Yiyin could match the GPT-3.5-based ChatGPT, OpenAI delivered the far more powerful GPT-4 on the eve of Baidu's event.
While others were busy benchmarking GPT-3.5, OpenAI struck down its own GPT-3.5 with its own hands, like a cold-blooded assassin, and netizens have already spun memes about how anxious the competing companies must be.
"I can't help but feel that even in the U.S., this has nothing to do with most U.S. technology companies." Wakatabe described the speed of OpenAI iteration as making him feel alarmed, "All the people and companies trying to catch up are currently at least two years behind the progress. In this era of explosive AI growth, two years is three lifetimes."
Looking back at GPT's development, it has gone from quantitative to qualitative change in five years. GPT-1, first released in 2018, had only 117 million parameters; GPT-2 raised the bar to 1.5 billion; GPT-3 and GPT-3.5 pushed the neural network to 175 billion; and GPT-4 reportedly employs more than 200 billion parameters and draws on over 2 million data sources (versus 450,000 for GPT-3.5), spanning text, image, audio, and video data from the Internet.
By comparison, Google disclosed in early 2022 that its LaMDA model had 137 billion parameters, fewer than GPT-3 had at the time. And according to Baidu, Wenxin Yiyin's model reaches 260 billion parameters, more than the figure reported for GPT-4, which may still leave some room for expectation.
However, some experts point out that parameter count is not the decisive factor in an AI chatbot's ability; beyond it, data cleaning and labeling, model architecture design, and accumulated expertise in training and inference all shape the performance of the final product.
GPT-4 doesn't even have to worry about productization: it is already running inside ChatGPT and Microsoft's search engine Bing. As it speeds away, Google and Baidu loom in the dust behind it.