Looking back at history is always dramatic.
In May 2021, at the Google I/O conference, a demo of the chatbot LaMDA role-playing Pluto in conversation with humans drew immediate applause from the audience. By then, six years had passed since Google declared its "AI First" goal, and the world-shaking release of OpenAI's ChatGPT was still nearly 18 months away.
At that point, Google was still the pioneer of the AI field. But two key engineers behind the project, De Freitas and Shazeer, were frustrated.
They had hoped to show the public a case for LaMDA powering Google's assistant, but over the years the chatbot project went through review after review and was barred, for a variety of reasons, from any broader release.
A year earlier, OpenAI had released GPT-3 with 175 billion parameters and opened its API for testing. Google, however, hesitated to make its conversational model available to the public, citing various risks of "technical political correctness."
De Freitas and Shazeer therefore decided to leave. Despite CEO Pichai's personal efforts to retain them, the two departed Google at the end of 2021 and founded Character.AI, now one of the unicorns of the large-model field.
Google thus lost its first-mover advantage to lead the change.
The rest of the story is widely known: at the end of 2022, ChatGPT arrived, making OpenAI famous and sending its investor Microsoft on the offensive in every direction. With GPT-4 behind it, Microsoft launched the search product New Bing, aimed squarely at Google. Not just Google but all of Silicon Valley, and indeed the world, was shaken.
Eight months on, across the ocean, the shock of OpenAI's debut has faded. Silicon Valley's giants have moved past the panic phase and found their footing on the new battlefield, while startups press forward one after another. In China, meanwhile, the "war of a hundred models" presents a different scene altogether.
Over the past six months of rapid technological and commercial change, the industry's understanding of, and consensus on, large models has been continuously updated. After conversations with nearly 100 entrepreneurs, investors, and practitioners in Silicon Valley and China, Geek Park has distilled five observations on the state of large-model startups and attempts to present a yet-to-be-validated "business worldview of large models."
Google's missed opportunity and OpenAI's stunning debut remind us that falling behind and seizing the lead alternate. Today's technological and commercial evolution is far from over, and the real change has not even begun. All one needs to remember is that innovation can happen anywhere, at any time.
Note: The full article runs 14,573 words and takes about 30 minutes to read.
Schrödinger's OpenAI: Everyone's Hero, Could Be Everyone's Enemy
While China's startup community still regards OpenAI as the new god of Silicon Valley, Silicon Valley itself has quietly begun to move beyond OpenAI.
OpenAI has achieved the technological breakthroughs and remains a magnet for AI talent; even as many companies claim to be poaching from it, the truth is that, to this day, OpenAI still sees a net inflow of top technical talent. Its belief in AGI, and eight years of vision-driven technological breakthroughs, have made the company a hero to many.
But heroes have to make a living, too. The next step after a technological breakthrough is to create value and close the business loop. The debate in Silicon Valley is: can OpenAI really stay ahead?
Several Silicon Valley entrepreneurs, practitioners, and investors independently expressed negative judgments to Geek Park. What they question is that, as of now, the business model OpenAI has presented carries hidden worries: in the pre-ChatGPT stage of non-consensus, OpenAI could still win resources on the conviction of a few believers, but now that AGI has become consensus and competitors abound, the challenge and difficulty of staying in front will rise sharply.
If we divide profit models into toB and toC: OpenAI lacks a toB gene and faces a strong rival there. In enterprise services, its own investor Microsoft is the king of the field; Microsoft holds over 65% of the enterprise chat-app market, and its Teams has in recent years steadily eaten into the market of the star company Slack. In the toB space, 48-year-old Microsoft, having weathered several technology cycles, has undoubtedly accumulated more experience than the startup OpenAI.
If it wants toB business, OpenAI also faces questions about the risk of centralization. At present, its open-API model has attracted a number of enterprise customers, especially small and medium-sized developers who cannot afford to train a large model independently; for them, access to the GPT-series APIs is an excellent choice. Jasper.AI, freshly minted as a unicorn, is one of the best examples: by building on GPT-3, Jasper.AI reached a valuation of $1.5 billion only 18 months after its founding.
"But people don't think highly of Jasper.AI for precisely that reason," an investor from a mainstream Silicon Valley fund told Geek Park. Private data is an enterprise's most important asset, and feeding it to a centralized model is first and foremost a compliance and security issue. Although Sam Altman promised in May that OpenAI would not train on data from API customers, this neither assuages enterprises' concerns nor earns their trust.
"Some enterprise customers in the U.S. are generally wary of using OpenAI," the investor told us. In enterprises' eyes, OpenAI is the closest thing to an AWS of the large-model era, but they do not approach it with the logic they apply to AWS: "Customers are generally reluctant to hand their data and key competencies over to OpenAI," given the risks involved. Even if the GPT series, with its centralized capabilities, can help train large models in vertical domains, competitiveness built on it is dangerous for customers. It becomes a kind of siphoning: if a company's own data and experience can ultimately be called up by others, the competitive barriers of industry leaders will fall.
And what about doing toC?
OpenAI does appear to have a user advantage on the consumer side. Since the release of its blockbuster product ChatGPT, monthly activity has climbed all the way to 1.5 billion, against Instagram's 2 billion. Yet enormous monthly activity does not necessarily give OpenAI a data flywheel. "The data from users continually asking questions is not of great value for training the large model," one startup founder pointed out.
It is worth noting that in June, ChatGPT's monthly activity declined for the first time. Speculation about the causes includes:
The novelty of the technology has worn off. In fact, ChatGPT's visit growth rate had already been slipping, coming in at just 2.8 percent in May.
Summer vacation has reduced student usage.
And a more serious speculation: ChatGPT's answer quality has declined, dragging usage down with it. When GPT-4 first launched, it was slower but its answers were of higher quality, whereas after an update a few weeks ago, users reported that answers came faster but were perceptibly worse.
What's more, giants including Google, Meta, and Apple will also concentrate their efforts on toC products. Google, for example, has re-integrated its Brain and DeepMind teams precisely to blunt OpenAI's technological advantage. "Given the giants' massive user bases, OpenAI's existing subscription revenue could be jeopardized if a giant launches a free product." OpenAI must maintain its technological moat; once it is breached, the company could easily be attacked by giants wielding a price advantage.
Today's OpenAI is like Schrödinger's cat: it is hard to say whether its future is bright, but what is certain is that it will be under the close watch of every giant.
The giants each fight their own battles, but their goals are the same
So what are the giants of Silicon Valley doing?
Geek Park spoke with a large number of Silicon Valley practitioners and found that, compared with the moment ChatGPT debuted, the panic phase for Silicon Valley's giant companies is basically over. These veterans of the marketplace have quickly staked out their positions and accelerated their technology push, defending the quadrants they excel in and making sure they are not disrupted.
Their consistent approach is to expand along their existing strengths, looking for where large models can help them, including directions of possible disruptive innovation: on the one hand reinforcing their business advantages to guard against surprise attacks from rivals, and on the other laying an early stake in whatever new battlefield may emerge.
Zhang Peng, founder and CEO of Geek Park, laid out this observation in his speech at the recent, much-discussed AGI Playground conference.
When ChatGPT burst onto the scene, Microsoft immediately joined hands with OpenAI to launch New Bing. Fearing that a GPT-4-enhanced New Bing would shake the foundation of its search engine, Google chose to respond hastily in February by releasing Bard, which left the outside world with the impression of a mess and hurt both talent retention and capital-market confidence.
However, Google's latest second-quarter earnings report showed better-than-expected growth, and together with the comprehensive technology lineup displayed at the earlier I/O conference, it successfully restored outside confidence in the company.
This also seems to confirm that using a paradigm revolution to disrupt incumbent giants in an existing market is not so easy.
As early as 2015, Google set the goal of AI First, but for internal reasons it missed the chance to lead generative AI. After Bard, Google switched the model behind it from LaMDA, a lightweight dialogue model, to its self-developed PaLM. In May this year, Google released the upgraded PaLM 2 and added generative-AI features to many products at once, including Gmail and Google Maps, moving very actively on standardized consumer products. Of these, the two that drew the most outside attention were Gecko, a lightweight version of PaLM 2 that can run on-device, and Gemini, a multimodal model under development.
Of course, Google's strong consumer-side genes are also seen as a possible constraint on its exploration of the enterprise side. In fact, Google is making a push there as well: beyond the Google TPU, it has added the A3 AI supercomputer based on Nvidia's H100, plus Vertex AI, its enterprise-facing AI platform.
For Google, the current situation is undoubtedly dangerous. In the era of large models, many rivals will pile into search, and as the incumbent search giant, Google can be ambushed at any moment; defending well is its critical vulnerability.
Geek Park has learned that after the panic at the start of the year, the giant has calmed down and begun to act. In April, Google merged DeepMind and Google Brain into Google DeepMind, with Demis Hassabis, the DeepMind co-founder who firmly believes in AGI, as its head, and Jeff Dean, former head of Google Brain, as Google's chief scientist. The restructuring not only concentrates resources further but also signals Google's determination to catch up.
After the merger, Google DeepMind and Google Research aim to crack a number of key AI problems, the first being multimodal models. There are rumors that Google is training Gemini on YouTube video data; given that multimodality is widely seen as the next key technology for large models, this fuels speculation about whether Google will take the lead.
After all, Google has 3 billion users, strong technical capability, and tens of billions of dollars in annual revenue. Even if it does not react fast enough, that scale advantage lets it hold its defensive citadel, so long as it does not fall behind technologically.
Surveying the giants' commercial moves, Geek Park has drawn several conclusions from extensive exchanges:
First, the giants' panic phase has ended, and they have re-set their goals. The core goal is to maintain their leading position in the industry, and if an opportunity arises to strike at competitors, they naturally will not pass it up. In the paradigm revolution of large models, theoretically, any mature company that uses large models well could launch a blitzkrieg against the giants, while any giant too slow to fold large models into its products risks being blitzed itself, as Microsoft Search is to Google Search, or Microsoft's cloud services are to Amazon's. The possibilities opened by large models are vast and unknown, and previously settled business boundaries will blur anew.
Second, the giants' purpose in training their own centralized large models is not OpenAI's oft-stated "reaching AGI" but something more strategic and defensive. Apart from Google, whose business is closely tied to chatbots, each company may not insist on training a world-class ChatGPT-type product; they care more about using large models to defend their existing business and to retain the ability to counter a blitzkrieg.
Third, because the technology is still developing, using large models to blitz rivals, or to profit from scale directly, is far harder than imagined. Since Microsoft launched New Bing in February this year, it was once thought to be outpacing Google's traffic growth, but since April, reports have shown Bing's search share falling rather than rising, and as of July, Google's position in search has not been shaken.
An entrepreneur planning to serve the toB field with large models told Geek Park that giants seeking to provide standardized services via large models will also be pulled into fierce competition to some degree. His SaaS company, for example, connects on the backend to a number of large models (language models, translation models, and so on) from OpenAI, Google, open-source projects, and others. "Let them compete on price and performance," the entrepreneur said.
In addition, in the era of large models, Silicon Valley's brain drain "is very real," multiple practitioners told Geek Park. Both historically and now, any giant that fails to build a competitive business with large models will quickly lose its top AI engineers. As early as 2022, Meta's fixation on metaverse concepts led a number of senior AI experts to jump ship and its London branch to collapse, while OpenAI in its early days poached more than a hundred people from Google to expand its business. Top AI engineers who leave a company are basically impossible to win back in the short term.
Finally, cloud computing in the AGI era remains an absolute giants' race: cloud computing is itself a giants' business, and training large models requires enormous computing power. As in a gold rush, the money goes to whoever sells the shovels. At a moment of high uncertainty in both the model layer and the application layer, cloud vendors are sure to profit, and for now, providing "better cloud services," such as delivering results with less compute and meeting the needs and scenarios of model training, will be a huge advantage.