How do the world's largest hedge funds view AI?
In an interview on Monday, July 3, Bridgewater co-Chief Investment Officer Greg Jensen laid out Bridgewater's views on AI, sharing his thoughts on how Bridgewater is investing in AI, how it uses AI in its investment process, and his outlook for the technology.
How Bridgewater invests in AI
Jensen states:
In restructuring Bridgewater, we also did something we hadn't done before: we assigned some people to research and invest in things that might not be immediately profitable but were long-term projects. So we set up this AI program with a team of 17 people, led by me. I'm still actively involved in the core Bridgewater fund, but the other 16 people are 100% committed to reinventing the Bridgewater fund through machine learning. We're going to set up a fund run exclusively by machine learning, which is what we're doing now in the lab, pushing the limits of what AI and machine learning can do.

Trying to set up such a fund still faces real problems. Large language models have two types of weaknesses. The first is that these models are trained primarily on the structure of language, so they usually return something that looks grammatically and structurally correct, but not always an accurate answer. That's one problem. Second, they hallucinate: they make things up, because they care more about what word or concept plausibly comes next than about whether that concept is accurate.

Jensen therefore believes that AI can help people conceptualize and theorize about what they observe, but that there is still a long way to go before AI can really be used to pick stocks. Bridgewater's real focus is thus elsewhere:
But there are ways to combine that with statistical models and other types of AI. That's what we're really focused on: combining large language models, which are less accurate, with statistical models, which are good at accurately describing the past but terrible at predicting the future. By combining the two, we are starting to build an ecosystem that I believe can do what Bridgewater's analysts do. If this ecosystem is built, we will have the equivalent of millions of upper-middle-tier investment associates working at the same time. If we can control AI's hallucinations and errors through statistics, we can get a lot of work done quickly. That's exactly what we're doing in the lab: proving that the process works.
How does Bridgewater invest through AI?
If it were possible to build an ecosystem that encompassed AI and other technologies, how would Bridgewater use this system to make investments?
Jensen believes that statistical AI and large language models can complement each other and act as a "right hand" in Bridgewater's investment process:
Statistical AI can take theories, go back and see whether they were correct at least in the past and what their flaws were, refine them, and advise on how to do things differently; then we can talk to it. One advantage large language models have is the ability to take a complex statistical model and discuss what it is doing, and there are ways to train language models to do this. The way we set this up, the language model comes up with potential theories. It's not the most creative thing in the world, but it generates theories at scale, that's for sure. Large language models are very good, but we have to tweak them somehow, and we can then use statistics to keep them in check. We can then use the language model again to take the results from the statistics engine, discuss them with humans or other AIs, and report what was found, what it means, and what kind of theory it supports. If the conclusions run contrary to what people believe, more testing is done.

That's the loop I'm very excited about. As I said, statistical AI has so far been limited because it focuses on market data. The benefit of language models is that they can understand things that statistical models can't. For example, statistical models of the market have no concept of greed, but large language models understand the concept of greed almost as well as a person does: these models have read every article about greed and fear and so on. Combining the two now produces something like a human mode of thinking.
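The loop Jensen describes can be sketched in a few lines. This is a toy illustration under stated assumptions, not Bridgewater's actual system: the language model is stubbed out with a fixed proposal (`propose_theory`), and the returns series is invented.

```python
# Illustrative sketch of the loop: a language model proposes a testable
# theory, a statistical engine checks it against the past, and the verdict
# is fed back for refinement. The LLM is stubbed out with a fixed proposal.

def propose_theory():
    """Stand-in for a language model proposing a market hypothesis."""
    return {
        "name": "momentum",
        # Predict the next period is up when the latest return was positive.
        "rule": lambda returns: returns[-1] > 0,
    }

def backtest(theory, returns):
    """Statistical check: how often did the rule call the next move?"""
    hits = trials = 0
    for i in range(len(returns) - 1):
        prediction = theory["rule"](returns[: i + 1])
        outcome = returns[i + 1] > 0
        hits += prediction == outcome
        trials += 1
    return hits / trials

# Toy daily-return series (made up for illustration).
history = [0.01, 0.02, -0.01, 0.03, 0.01, -0.02, -0.01, 0.02]
theory = propose_theory()
hit_rate = backtest(theory, history)

# The verdict goes back into the loop: a real system would hand this
# summary to the language model (or a human) to refine the theory.
feedback = f"{theory['name']}: hit rate {hit_rate:.0%} over {len(history) - 1} trials"
```

The point of the sketch is the division of labor: the generative model only proposes, while the statistical step is the sole judge of whether the proposal held up in the past, which is how hallucinated theories get filtered out.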
What does AI mean for human employees?
As time goes on, computers can do more and more. According to Jensen:
I would say that today, humans have been reduced to the roles involving intuition and creativity, and we use computers to memorize rules and run them with consistent accuracy. That's only halfway through the transition, and now it's time for another leap. There is no doubt that AI will change the role played by investment assistants. To be precise, we will still need people to work around these systems for the foreseeable future, and it will take us a while to build out the ecosystem of machine learning agents and so on.

Leveraging AI is going to be part of the future of work, and I think it will be hard not to use these technologies in any knowledge industry. In programming, we're seeing a huge breakthrough in coding. With AI, people just need to know what they want to code rather than how to code it, which is a huge breakthrough. People who are not well trained or competent in C++, Python, or whatever can suddenly get what they want much faster. So the skill sets needed in the workplace are changing, and they're changing in ways that surprise many people, because a lot of knowledge work, like content creation, that people once thought was far from being replaced by machines is actually close at hand. The bottom line is that so much is changing that it's essential to stay flexible in the workplace and take advantage of whatever tools are available.
Can you use AI to manage investments directly?
With a variety of AI investment-management tools now on the market, the question arises: as AI develops, could humans in the future simply hand their investments over to AI?
Jensen argues that:
I think it's both going to lead to accidents and very exciting to me. Obviously, I'm excited about the power of AI, and I think there are ways to make good use of it. But at the same time, AI can create a lot of mistakes. Some funds use GPT to pick stocks, but their managers don't really have a deep understanding of AI and the weaknesses it can have.

One example comes from the real estate market: Zillow, a real estate platform, used AI to predict home prices, value houses, and enter the market to buy those the AI thought were undervalued. But Zillow had several problems. One is that while they had a large amount of housing data, that data covered a relatively short period of time. So even though they appeared to have a large number of data points, there was still a macro cycle affecting the assessments they made. Second, they underestimated the gap between theory and practice in what was in fact an adversarial market. This was obviously a huge problem for Zillow, which had a big impact on the real estate market and then suffered a huge failure.

Going back to the stock market, very short-term trading is arguably better suited to machine learning because there is a lot of data from which AI can learn quickly. Over longer horizons, though, AI may not be able to play that role. The data is like a person's heart-rate data over a lifetime. You might think: wow, my heart has been beating for 49 years, that seems like a lot of data, but when you have a heart attack, all of that data is suddenly irrelevant. So even a lot of data can be misleading, and those issues will cause huge problems for these technologies. One has to understand these tools, what they are good at and what they are not, and put them together in a way that plays to the strengths of each and circumvents the weaknesses.
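The heart-rate analogy can be made concrete. Here is a toy sketch (the numbers are invented and the "model" is just a sample mean, chosen purely for illustration) of how a long but regime-bound history says nothing about a shock:

```python
import random

# Toy illustration of the "heart-rate" pitfall: a model fit on a long but
# calm history looks very accurate in-sample, yet a regime change (the
# "heart attack") makes that entire history irrelevant.

random.seed(0)

# 49 "years" of calm readings around 70 bpm (invented numbers).
calm_history = [70 + random.gauss(0, 2) for _ in range(49)]

# Naive model: predict the historical mean forever.
predicted = sum(calm_history) / len(calm_history)

# In-sample, the worst miss is only a few bpm...
in_sample_error = max(abs(x - predicted) for x in calm_history)

# ...but during the shock, the prediction misses by far more than anything
# the history ever showed.
shock_reading = 180.0
shock_error = abs(shock_reading - predicted)
```

The same shape applies to Zillow's housing data: many observations, but all drawn from one stretch of one macro cycle, so the in-sample fit said nothing about what would happen when the regime turned.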
There is still a lot of work to be done on large language models, and we can certainly train them through reinforcement learning to make sure they don't make known mistakes.
Are markets still dominated by optimism?
Jensen believes that the market is still dominated by optimism. He said:
The Fed seems to be a little more realistic than the market in terms of the actions it will take. When you look at the market's reaction, you see that it's very optimistic. But we have to note that historically, the market is often prone to over-optimism.