
The Metaverse attracted considerable market attention in 2021. Many companies have entered the field, including well-known international technology companies such as Meta, Microsoft, and Nvidia. Domestic giants such as Tencent, ByteDance, and Alibaba have also invested to varying degrees.
Today the metaverse is still an exploratory concept with no precise definition, but we can identify several of its key technologies: computing, display, communication, and modeling.
Computing Technology
Computing performance depends mainly on two factors: hardware (chips) and algorithms. The metaverse's operation relies heavily on cloud computing. Display devices must stay small and light, which means they cannot shoulder all of the computation the metaverse requires on their own; this is where cloud computing comes in.
The construction of the metaverse requires enormous computing power, which will in turn drive chip development. AMD CEO Lisa Su said, "We are in a period of high-performance computing explosion, which is driving the need for more computing to support the services and devices that affect every aspect of our lives."
On November 9, 2021, at its GTC technology conference, NVIDIA announced a major business update: it will provide foundational technology, such as chips, for the metaverse. AMD, which also develops graphics cards (GPUs), has likewise begun serving metaverse pioneers. On the same day, AMD announced that Meta (formerly Facebook) had become a business partner; Meta will purchase AMD chips to meet the enormous data-center and computing-power demands of its transformation from a "social media" company into a "metaverse" company.
The rise of VR/AR products will also drive development of their most critical component, the chip. As with automotive chips, today's VR/AR chips have broken away from mobile-phone silicon and become independent and customized.
Previously, because VR/AR products were a niche market, few technology companies designed chips for them. For example, in 2018, Oculus and Xiaomi jointly launched the standalone headset Oculus Go (sold in China as the Xiaomi Mi VR Standalone).
Similarly, the 2014 Oculus Rift DK2 used the display panel of the Samsung Galaxy Note 3. In the early days, VR/AR hardware clearly relied on the mobile-phone supply chain.
However, as the market expands, the technology chain around VR/AR products will also have the opportunity to flourish, especially chips. In 2018, Qualcomm released the XR1, a chip designed specifically for VR/AR products. Today's popular Oculus Quest 2 uses the Qualcomm XR2 (the XR1's successor). The strong sales of the Quest 2 have prompted Qualcomm to invest further in the VR/AR technology chain; it is rumored that after the success of the XR2, Qualcomm set up a dedicated department whose team continues to design chips for VR/AR devices.
On the other hand, technology giants like Apple, Meta, and Magic Leap are likely to continue down the path of in-house, customized chip development. According to foreign media reports, Apple completed the design of its VR/AR chips in 2020 and commissioned TSMC to produce them on an advanced 5nm process. From Qualcomm to Apple, companies are pouring effort into the core field of chips, and it is foreseeable that VR/AR chips will enter a period of accelerated development.
The operation of a virtual world is inseparable from powerful computing, and cloud computing will develop rapidly as people continue to build the metaverse. The explosion of data in the metaverse drives a surge in demand for computing power, making cloud computing one of the metaverse's most important pieces of infrastructure. It is unrealistic to expect ordinary people's computers to run the metaverse's enormous codebase; cloud computing is a good solution.
To connect to the metaverse, we need to supply "parameters": where to go, which commands to issue, and so on, much like typing commands on a keyboard in a game. But the parameters people feed into the metaverse are far more complex than keyboard input. Sensor devices collect our "input parameters," send them to the cloud for computation, and convert them into parameters the metaverse can understand, letting our virtual selves move or issue commands.
Once instructions have been conveyed, the next step is modeling. All objects, buildings, and so on in the metaverse are virtual, which requires modeling. Nvidia once interspersed a few seconds of a "fake" Jensen Huang and background into its online GTC keynote. More than 30 staff scanned Huang with RTX ray-tracing technology, took thousands of photos of him and his kitchen from various angles, modeled the kitchen in Omniverse (NVIDIA's virtual-collaboration platform), and finally combined it all with AI to make the fake look real. If building a single kitchen requires that much data to generate and simulate, imagine an entire world: a realistic, vast metaverse needs powerful cloud technology to support powerful simulation.
Humans also need a certain amount of "feedback" to feel connected to the metaverse. After computation, cloud technology sends the "feedback" back to the device in hand, giving us perceptual feedback.
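The input-computation-feedback loop described above can be sketched in a few lines. Everything here is illustrative: the function names, the sensor fields, and the mapping from a trigger press to a "grab" action are all hypothetical stand-ins, not any real platform's API.

```python
# Illustrative sketch of the sensor -> cloud -> feedback loop.
# All names and data fields below are hypothetical.

def collect_sensor_input():
    """Stand-in for headset/controller sensors sampling the user's state."""
    return {
        "head_rotation": [0.0, 15.0, 0.0],   # degrees: pitch, yaw, roll
        "controller_button": "trigger",
        "position_delta": [0.1, 0.0, 0.2],   # metres moved since last sample
    }

def cloud_compute(raw_input):
    """Stand-in for the cloud: turn raw input into world-state updates."""
    return {
        "avatar_moved_by": raw_input["position_delta"],
        "action": "grab" if raw_input["controller_button"] == "trigger" else "idle",
    }

def render_feedback(update):
    """Stand-in for the feedback streamed back to the device (display, haptics)."""
    return f"avatar {update['action']}s, moves by {update['avatar_moved_by']}"

frame = collect_sensor_input()
state = cloud_compute(frame)
print(render_feedback(state))
```

In a real system each stage runs continuously at display refresh rate, but the division of labor is the same: the device senses and displays, while the heavy computation happens remotely.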
Many technology companies are also actively building related facilities. For example, Unity recently launched a cloud-based distributed computing solution which, according to Unity, can cut computing time by up to 70%, improving overall efficiency while reducing the load on local computing resources and greatly saving costs.

From a computing point of view, the metaverse's future is a "cloud-edge-device" collaboration model. At present, however, neither cloud nor device-side mainstream chips have anywhere near the computing power that metaverse applications require. The bottleneck is especially severe on the device side, which must run not only intelligent-perception algorithms but, more importantly, the core photorealistic rendering algorithms that fuse the virtual and the real. These algorithms demand enormous computing power at ultra-low power consumption, and today's mainstream device-side chips (the chips in display devices) do not meet such strict requirements.
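A rough back-of-envelope calculation shows why on-device photorealistic rendering is so demanding. The panel resolution, refresh rate, and per-pixel shading cost below are illustrative assumptions for the sketch, not measured figures for any particular headset.

```python
# Back-of-envelope estimate of the shading workload for a VR headset.
# All constants are illustrative assumptions.

eyes = 2
width, height = 2064, 2208      # assumed per-eye panel resolution
refresh_hz = 90                 # a common VR refresh rate
shading_ops_per_pixel = 1000    # assumed cost of a photorealistic pixel

pixels_per_second = eyes * width * height * refresh_hz
ops_per_second = pixels_per_second * shading_ops_per_pixel

print(f"{pixels_per_second / 1e6:.0f} Mpixels/s to shade")
print(f"~{ops_per_second / 1e12:.1f} Tera-ops/s of shading work")
```

Even these modest assumptions imply hundreds of megapixels per second, all within a battery and thermal budget of a few watts, which is why offloading rendering to the cloud or edge is attractive.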
Bloomberg previously predicted that the metaverse would become an $800 billion market by 2024, with infrastructure accounting for roughly $300 billion of that. Since computing is among the metaverse's most important elements, it is conservatively a $100 billion market, which has prompted companies such as Nvidia, AMD, and Unity to actively build out related businesses.

Display Technology
Showing the metaverse to people is inseparable from virtual and augmented reality (VR/AR). As an important bridge between the real world and the metaverse, this technology has attracted attention and investment from many players, including well-known technology giants.
Meta, which has gone "all in" on the metaverse, launched its Oculus line early on. The Oculus Quest 2, released in 2020, has been rated the most worthwhile virtual reality device by many well-known technology sites, and at $299 it is arguably the most affordable VR device available today. Its launch also helped Meta capture more than 75% of the market (by Q1 2021 shipments).
Microsoft, too, has its own headset, the HoloLens, now in its second generation. At $3,500, the HoloLens 2 lags far behind Meta's Oculus Quest 2 in price/performance.
Apple is also actively working on a headset, expected to launch in 2022. With its strength in combining software and hardware, Apple is bound to be a formidable competitor when it enters the field. In May 2020, Apple also acquired the virtual reality company NextVR for about $100 million to strengthen its VR capabilities in entertainment and sports.
Ma Jie, a vice president of Baidu, said at the XR International Forum that Baidu will support the development of metaverse and VR technology. On October 19, 2021, Liu Liehong, chairman of China Unicom, said at the 2021 World VR Industry Conference that China Unicom will help accelerate the development of VR technology.
According to IDC, global VR device shipments are accelerating: global VR headset shipments are expected to grow at a 41% compound annual growth rate from 2021 to 2025, rising from 6.7 million units in 2020 to 8.5 million in 2021 and reaching 28.6 million in 2025.
The metaverse is defined by immersive experience, and without display technology it is an empty shell. Better display technology will let the metaverse offer richer virtual experiences such as virtual meetings, virtual shopping, and virtual fittings, letting shoppers overcome the limitations of online shopping and choose the best products for them from the comfort of home.
Today's main hurdle is the high price of devices: VR headsets are not an affordable option for most people. Even the most accessible device, the Oculus Quest 2, costs $299 (about 1,892 yuan). As display technology improves, the price of such equipment should gradually drop into a range ordinary consumers can afford.
The current weakness of display technology is that neither display devices nor image processing and rendering algorithms fully meet the technical requirements of metaverse applications. Most mainstream AR/VR display devices on the market suffer from heavy weight, high power consumption, poor resolution, and serious color cast, and often cause dizziness. Producing a light, high-quality display module will require major breakthroughs in materials science and optics; display technology still needs time to mature.
Communication Technology
An important element of the metaverse is connecting people. The metaverse relies on VR/AR technology for interactive experiences, but VR/AR consumes large amounts of data, so sending data to the cloud for computation and feeding results back to the device is a good solution. The difficulty today lies in achieving low-latency connections, which places high demands on communication technology, especially 5G.
Earlier Oculus headsets (like the Oculus Go) relied on smartphone-class hardware for rendering and computation, making them heavy, hot, and far less graphically capable than a PC or console. The latest Oculus Quest 2 no longer needs a phone and computes on its own, but its graphics are still rough. Clearly, it is unrealistic for AR/VR devices to fully handle the rendering and operation of virtual worlds themselves; instead, we can use cloud computing, sending the important input parameters to the cloud and receiving the output.
With faster connections such as 5G, AR/VR devices will be able to have their graphics rendered remotely and the results streamed back quickly for display, turning the headset into little more than a display plus a set of sensors.
However, achieving this requires reliable latency below roughly 7 milliseconds plus another technology, mobile edge computing, which in turn requires standalone 5G networks.
Mobile edge computing (MEC) brings cloud computing capabilities and an IT service environment to the edge of cellular networks. The basic idea is that by running applications and processing tasks closer to cellular customers, network congestion can be reduced and application performance improved. MEC is designed to be deployed at cellular base stations or other edge nodes, enabling flexible and rapid rollout of new applications and services.
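To see why edge placement matters, consider a rough motion-to-photon latency budget for cloud-rendered VR. Every number below is an illustrative assumption for the sketch, not a measurement; the point is that the network legs must stay within a few milliseconds, which is only plausible when the server sits at a nearby edge node rather than a distant data center.

```python
# Illustrative motion-to-photon latency budget for cloud-rendered VR.
# All stage timings are assumptions for the sketch.

budget_ms = {
    "sensor_sampling": 2.0,
    "uplink_to_edge": 3.0,      # radio + backhaul to a nearby MEC node
    "edge_rendering": 8.0,      # GPU frame time at the edge server
    "downlink_to_device": 3.0,
    "decode_and_display": 4.0,
}

total = sum(budget_ms.values())
print(f"total motion-to-photon latency: {total:.1f} ms")
print("within the ~20 ms comfort target" if total <= 20 else "over budget")
```

Replace the two 3 ms network legs with the tens of milliseconds typical of a round trip to a remote data center and the budget is blown immediately, which is the case for MEC in a nutshell.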
5G has been a hot topic in recent years, with much discussion of its application scenarios, and the metaverse will be an important one. 5G, 6G, and related communication technologies will develop rapidly under the metaverse trend, and in turn the metaverse will offer people a more realistic virtual experience thanks to that development.
The greater bandwidth that the metaverse gains from 5G and more advanced communication technologies means that rendering of its virtual world can happen on edge devices and be transmitted to the user's VR/AR device. As communication technology advances, VR/AR devices may shrink further in the coming years, making them more comfortable to use.
Modeling Technology
To be a truly immersive platform, the metaverse needs a three-dimensional environment, and building it is inseparable from modeling, whether of scenes, surrounding objects, or avatars. Developers still have a long way to go: constructing a vast 3D virtual world that mirrors the real one requires massive amounts of high-quality 3D content.
The mainstream approach today is manual modeling with 3D software. Numerous professional tools are on the market, such as 3ds Max, Maya, and Blender. They support a variety of modeling methods, including polygon, surface, parametric, and reverse modeling, suited respectively to creating 3D models for animation, games, interior design, and other scenarios. The advantage of manual modeling is high precision and unconstrained creativity; the disadvantages are the high cost of training modelers, the difficulty of production, and long production cycles. Building the metaverse by manual 3D modeling alone would be extremely labor-intensive.
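At its core, the polygon representation these tools manipulate is simple: a mesh is a list of vertices (3D points) plus a list of faces that index into it. The hand-built cube below is purely illustrative of that data structure, not the format of any particular package.

```python
# Minimal sketch of a polygon mesh: vertices plus faces that index them.
# This unit cube is hand-built for illustration only.

vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # bottom quad
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # top quad
]

faces = [
    (0, 1, 2, 3), (4, 5, 6, 7),   # bottom, top
    (0, 1, 5, 4), (3, 2, 6, 7),   # front, back
    (1, 2, 6, 5), (0, 3, 7, 4),   # right, left
]

print(f"{len(vertices)} vertices, {len(faces)} quad faces")
```

A production character or building model may carry millions of such faces, plus textures, materials, and rigging, which is where the labor cost discussed above comes from.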
With the spread of 3D-modeling knowledge and rapid technical progress, fields such as industry, 3D printing, and e-commerce increasingly use professional instruments to scan objects for model reconstruction. The main instrument types are laser scanners and light-field scanners. Laser-scan modeling measures distance from the laser's reflected signal and computes depth data algorithmically; light-field modeling uses multi-camera arrays or dedicated light-field equipment to scan and model. The biggest advantage of instrument-based scanning is that it can reproduce the reflective characteristics of an object's surface, faithfully restoring the color, texture, and gloss of real materials and improving the fidelity of 3D rendering.
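The distance-from-reflection principle behind laser scanning is the time-of-flight calculation: distance equals the speed of light times the round-trip time, divided by two. The sketch below shows just that core formula with an illustrative pulse time; real scanners add calibration, phase-shift techniques, and per-point error correction.

```python
# Time-of-flight depth measurement, the principle behind laser-scan modeling.
# distance = (speed of light * round-trip time) / 2

C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_seconds):
    """Distance to the reflecting surface given the laser pulse's round-trip time."""
    return C * round_trip_seconds / 2.0

# Illustrative example: a pulse returning after 20 nanoseconds
# implies a surface roughly 3 metres away.
d = tof_distance_m(20e-9)
print(f"{d:.3f} m")
```

A scanner repeats this measurement millions of times across a grid of angles, producing the point cloud from which the depth data and final mesh are reconstructed.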
Instrument-based scanning is much faster than manual modeling, less demanding to operate, and can achieve high precision. Its disadvantages are that more sophisticated equipment costs more, it can only reconstruct from real objects and cannot create freely, it has limits on object size, and the results still require editing in professional 3D software before they can be put to use.
The company most actively positioned in the modeling field today is Unity. Originally focused on its game engine, Unity has gradually evolved into a platform for developing and operating interactive real-time 3D content, serving not only game production but also film, design, manufacturing, architecture, and many other industries.
On November 10, 2021, Unity announced its acquisition of the visual effects company Weta Digital. Unity said the ultimate goal is to put Weta Digital's advanced visual-effects tools into the hands of millions of creators and artists worldwide; integrated into the Unity platform, these tools will help enable a new generation of real-time 3D content and shape the future of the metaverse. Unity is also developing AI to lower the barriers to modeling.
There is currently no low-cost, low-barrier, high-efficiency modeling method on the market. Building the metaverse demands one, and the search for it will drive the development of modeling technology across the board: the software, the engines behind it, and the training of the people who use them.
Edwin