<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>taptap</title>
        <link>https://paragraph.com/@tiptap</link>
        <description></description>
        <lastBuildDate>Tue, 07 Apr 2026 03:01:39 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <image>
            <title>taptap</title>
            <url>https://storage.googleapis.com/papyrus_images/04749ea0838c62891848f86cda50a007ad554504f4cf0f3a5ffaf5f83ddbf039.jpg</url>
            <link>https://paragraph.com/@tiptap</link>
        </image>
        <copyright>All rights reserved</copyright>
        <item>
            <title><![CDATA[People taking obesity drugs Ozempic and Wegovy gain weight once they stop medication]]></title>
            <link>https://paragraph.com/@tiptap/people-taking-obesity-drugs-ozempic-and-wegovy-gain-weight-once-they-stop-medication</link>
            <guid>HPHcldco2whngU3x4fsX</guid>
            <pubDate>Fri, 16 Feb 2024 10:08:38 GMT</pubDate>
            <description><![CDATA[Patients taking blockbuster obesity drugs Ozempic or Wegovy will pack the pounds back on after they stop taking the medications. “I think this is what we see when people go on diets or different exercise regimens, similar to when they go on a pharmacological treatment,” Karin Conde-Knape, Novo Nordisk’s senior vice president of global drug discovery, said in an interview at CNBC’s Healthy Returns Summit on Wednesday. “As long as you’re keeping your intake the same, your output the same, you’r...]]></description>
            <content:encoded><![CDATA[<p>Patients taking blockbuster obesity drugs Ozempic or Wegovy will pack the pounds back on after they stop taking the medications.</p><p>“I think this is what we see when people go on diets or different exercise regimens, similar to when they go on a pharmacological treatment,” Karin Conde-Knape, Novo Nordisk’s senior vice president of global drug discovery, said in an interview at CNBC’s Healthy Returns Summit on Wednesday. “As long as you’re keeping your intake the same, your output the same, you’re able to control your weight. But if you go out of this, you will immediately start to come back.”</p><p>Conde-Knape said rates of weight gain after stopping Wegovy vary depending on the individual, adding that “some will come back earlier, some will come later.” Novo Nordisk makes both prescription drugs.</p><p>She said available data suggests most individuals will recover most of their weight within five years of stopping an obesity drug, and roughly 50% of their weight after two to three years. Some individuals may actually gain more weight after stopping an obesity drug than they initially lost, Conde-Knape added. Studies have similarly shown weight rebound in people who stop taking Ozempic.</p><p>She said it’s tied to how the drugs work. They mimic a hormone produced in the gut called GLP-1, which signals to the brain when a person is full. 
She called that a “direct effect on satiety,” and noted that the drugs can also control what type of food people crave.</p><p>But she said GLP-1 doesn’t rewire “your neural networks to really define a new body weight setpoint.” So any weight loss may not be permanent, according to Conde-Knape.</p><p>The Danish pharmaceutical company still needs to conduct more investigations and clinical trials to understand what drives those rates of weight gain, “but what is critically important is that definitely you need to stay,” Conde-Knape said.</p><p>Her remarks come after Ozempic and Wegovy catapulted to the U.S. national spotlight in recent years for being “weight loss miracles” in a nation obsessed with body image. In clinical trials, Wegovy was shown to decrease body weight by around 15%.</p><p>Hollywood celebrities, social media influencers and even billionaire tech mogul Elon Musk have reportedly used the popular pen-shaped injections to get rid of unwanted weight.</p><p>Wegovy has flown off shelves since gaining Food and Drug Administration approval for “chronic weight management” in June 2021. So has Ozempic, which was first authorized to treat diabetes and is now being used off-label for weight loss. That popularity sparked widespread shortages last year and prompted Novo Nordisk to ramp up production of Wegovy.</p><p>The shortage and other factors like out-of-pocket costs without insurance or unpleasant side effects have forced some people to stop taking Ozempic or Wegovy. That’s left many complaining of a rebound in weight that’s difficult to control.</p><p>Conde-Knape said so far data indicate that weight loss is maintained with long-term use of the drugs. But the company’s data only cover use for two to three years at most.</p><p>“We’ll need to see how much more with the longer duration of treatment, how much more people will be able to achieve,” she said.</p>]]></content:encoded>
            <author>tiptap@newsletter.paragraph.com (taptap)</author>
        </item>
        <item>
            <title><![CDATA[Big models "slimmed down" to fit in cell phones: is the next iPhone moment coming?]]></title>
            <link>https://paragraph.com/@tiptap/big-models-slimmed-down-into-cell-phones-the-next-iphone-moment-is-coming</link>
            <guid>3DV2IrX9xDmJjeAhuuae</guid>
            <pubDate>Wed, 09 Aug 2023 06:41:54 GMT</pubDate>
            <description><![CDATA[A wave of "on-device big models" is coming. Huawei, Qualcomm and other chip giants are exploring how to embed large AI models in end devices, so that cell phones can undergo a new kind of species evolution. Unlike AI applications such as ChatGPT and Midjourney, which rely on cloud servers to provide their services, the on-device big model focuses on delivering intelligence locally. Its advantages: it better protects privacy, and it lets the phone learn to become the user...]]></description>
            <content:encoded><![CDATA[<p>A wave of &quot;on-device big models&quot; is coming. Huawei, Qualcomm and other chip giants are exploring how to embed large AI models in end devices, so that the cell phone can undergo a new kind of species evolution.</p><p>Unlike AI applications such as ChatGPT and Midjourney, which rely on cloud servers to provide their services, the on-device big model focuses on delivering intelligence locally. Its advantages: it better protects privacy, it lets the phone learn to become the user&apos;s private intelligent assistant, and there is no need to worry about problems such as cloud server downtime.</p><p>Under existing technical conditions, however, cell phone performance is far from sufficient to run large models. The industry&apos;s mainstream approach is to &quot;slim down&quot; the large model through pruning, quantization and distillation, reducing the resources and energy it requires while losing as little accuracy as possible.</p><p>Qualcomm has begun developing chips for on-device big models. This signals that cell phones deploying AI models locally are on their way.</p><p>Handset manufacturers will lead big models to the mobile side</p><p>Big AI models are making a mad dash from the cloud to smart terminals.</p><p>On August 4, at the 2023 Huawei Developer Conference, Huawei released HarmonyOS 4. Compared with previous generations of the operating system, its most significant change is that big-model AI capabilities are built into the bottom layer of the system. Huawei is opening the curtain on bringing AI models to the &quot;intelligent terminal side&quot;.</p><p>Currently, the services provided by AI applications such as ChatGPT and Midjourney are basically delivered through cloud servers. Taking ChatGPT as an example, the big model and the computing resources behind it sit on remote servers; the user interacts with the server in real time, and the input text is processed by the server to produce a response. The benefit is efficient and stable operation of the model, since servers are usually configured with powerful computing resources and can be scaled up at any time to handle high load.</p><p>Now a new approach is emerging. Huawei is trying to bring the big model into the terminal itself, which means all of the above work can be done locally: the phone&apos;s operating system has its own AI capabilities and does not need to call AI cloud services to deliver intelligent upgrades.</p><p>Introducing HarmonyOS 4, Huawei executive director and terminal BG CEO Yu Chengdong said the system uses Huawei&apos;s Pangu big model as its underlying layer, and hopes to bring users a new AI experience of intelligent terminal interaction, higher productivity and personalized services.</p><p>The AI capabilities of HarmonyOS 4 are currently embodied mainly in Huawei&apos;s intelligent assistant &quot;Xiaoyi&quot;. With access to large models, Xiaoyi adds text, image and document input on top of voice interaction, and its natural-language understanding has improved. Xiaoyi can also connect commands to a variety of services and scenarios, such as automatically extracting text from images, generating various kinds of business email content, or generating images.</p><p>A more important change is that Xiaoyi now has memory and the ability to learn. With continued use it comes to understand its &quot;master&quot; better and better, can intelligently propose travel plans, activity itineraries and other programs, and makes personalized recommendations based on the user&apos;s habits. Huawei revealed that these new Xiaoyi capabilities will open for public beta testing in late August.</p><p>By building a big AI model into the bottom layer of the phone&apos;s system, Huawei hopes to raise the phone&apos;s intelligence across the board. Although the Xiaoyi functions above are not &quot;profound&quot;, to accomplish them users would otherwise need to call on ChatGPT, Midjourney and several other applications. When the phone itself has AI capabilities, it acts like a more versatile assistant providing comprehensive services.</p><p>Before the release of HarmonyOS 4, Huawei had already tried connecting big AI models to mobile devices. In March this year, Huawei released the P60, which ships with a smart search function based on multimodal big-model technology; by miniaturizing the model, it runs a natural-language model on the phone itself.</p><p>Huawei is not the first to bring AI models on-device. At the 2023 World Artificial Intelligence Conference, Qualcomm demonstrated running a large model on a device: it ran the generative AI model Stable Diffusion on a phone equipped with the second-generation Snapdragon 8, performing 20 inference steps in 15 seconds to generate a 512x512-pixel image whose quality was not noticeably different from cloud processing.</p><p>During MWC 2023 in Shanghai, Honor CEO Zhao Ming likewise said Honor will promote the deployment of on-device big models on smartphones, to deliver multimodal natural interaction, accurate intent recognition, and closed-loop handling of complex tasks.</p><p>Apple is also in the spotlight. A month ago, Apple was revealed to be secretly developing &quot;Apple GPT&quot;, an artificial intelligence tool based on Apple&apos;s self-developed Ajax framework. Although specific details have yet to be disclosed, the industry generally speculates that Apple is likely to add a large model at the system layer to make its voice assistant Siri smarter, letting Siri shed its &quot;artificial stupidity&quot; hat.</p><p>Hype or new revolution?</p><p>It&apos;s not unusual for cell phone manufacturers to focus on big models, but why take the &quot;on-device&quot; route? After all, the interactive and generative capabilities of Huawei&apos;s Xiaoyi could also be provided through cloud servers, which seems more cost-effective and technically easier.</p><p>Is putting big AI models into smartphones hype, or genuinely necessary? On this question, both Yu Chengdong and Zhao Ming mentioned two key words: privacy security and personalization.</p><p>Yu Chengdong emphasized that Huawei&apos;s first principle for all AI experience innovation and scenario design is security and privacy protection, in order to build more responsible AI, and promised that AI-generated content will be labeled.</p><p>Compared with processing data in the cloud, the most obvious advantage of the smart terminal is privacy and security. ChatGPT has repeatedly been caught up in data-leak storms. In March this year, Samsung issued an internal ban on the use of ChatGPT after semiconductor employees were suspected of leaking company secrets through it; last month, OpenAI, the company behind ChatGPT, and its shareholder Microsoft were anonymously sued by 16 people for up to $3 billion, accused of using and leaking personal data without permission.</p><p>When all data processing happens on-device, users&apos; personal data is never uploaded to cloud servers, dramatically reducing the risk of privacy leaks. This also provides a prerequisite for the phone&apos;s AI assistant to truly become a life steward: only when privacy is guaranteed will users feel comfortable handing their data over to an AI to learn from.</p><p>In Zhao Ming&apos;s understanding, the mission of the on-device AI big model is to understand the user better: &quot;knowing what time I go to bed, knowing what I like to eat, and being able to solve my immediate needs, which is equivalent to having the ability to gain insight into my needs.&quot; To do this, the AI needs to be trained on the user&apos;s personal data and habits; eventually the smartphone could become an all-purpose assistant, or a personal robot secretary, able to help the user across scenarios such as dining, booking, consultation, entertainment and office work.</p><p>By contrast, ChatGPT and other mainstream AI applications are standardized products; without modification it is hard for them to act as personal assistants. They do not understand the user, but only respond to the user&apos;s input instructions. The phone is already a private personal intelligent device; if an AI model that understands human language can run on it, its level of intelligence will undoubtedly improve greatly.</p><p>In addition, cloud-dependent applications are unstable: because of network or server problems, the cloud&apos;s response can slow down or even go down entirely, as has already happened many times with ChatGPT. Localized large models would greatly weaken this dependence on the cloud and thus avoid &quot;cloud lag&quot;.</p><p>Given these characteristics, the &quot;on-device revolution&quot; of big models shows real potential, and may even let the cell phone, whose development has been bottlenecked for years, make another exciting species-level evolution, just like the moments when large-screen smartphones emerged and the iPhone launched.</p><p>But for large models to show their strength on the phone, there is an obvious problem: can the phone&apos;s chip bear it? Large models can easily contain tens or hundreds of billions of parameters and require astronomical amounts of training, consuming enormous computing power; the performance of existing phone chips clearly cannot meet the requirements.</p><p>The industry&apos;s current mainstream solution is &quot;model miniaturization&quot;.</p><p>Simply put, once the model&apos;s network structure is fixed, the model is &quot;slimmed down&quot; with as little loss of accuracy as possible, reducing the resources and energy it requires. The process usually involves three steps: trimming away parameters that have a very small impact on accuracy is called &quot;pruning&quot;; using lower-precision data types for inference is called &quot;quantization&quot; in the jargon; and extracting from the complex model a simpler model with similar performance is figuratively called &quot;distillation&quot;. The ultimate goal is to reduce the size of the model.</p><p>Meanwhile, Qualcomm and other chip makers are also developing dedicated chips for on-device deployment of AI models. Qualcomm&apos;s Snapdragon 8 Gen 2 mobile platform was the first to integrate an AI-dedicated Hexagon processor, which uses an independent dedicated power-supply system and supports micro-tile inference, INT4 precision and Transformer network acceleration, delivering higher performance while reducing energy consumption and memory footprint.</p><p>The on-device big model is setting off a new generation of smart-terminal revolution. IDC predicts that by 2026, nearly 50% of terminal devices in the Chinese market will have processors with AI engine technology. Another huge change that AI brings to human technological life may be on its way.</p>]]></content:encoded>
            <author>tiptap@newsletter.paragraph.com (taptap)</author>
        </item>
        <item>
            <title><![CDATA[AI battles for supremacy, Google hoards "data"]]></title>
            <link>https://paragraph.com/@tiptap/ai-battles-for-supremacy-google-hoards-data</link>
            <guid>9kGKhwqzfOnE4RFONpaF</guid>
            <pubDate>Tue, 04 Jul 2023 15:28:11 GMT</pubDate>
            <description><![CDATA[Data, one of the three key elements of AI technology development, has been the focus of the tech giants&apos; "battle" in this "war of the gods" over AI. On July 1, Google updated its privacy policy, making it clear that the company reserves the right to access content posted online by users to train its AI tools. The update to Google&apos;s privacy policy reads as follows: Google will use information to improve our services and develop new products, features, and technologies that benefit ou...]]></description>
            <content:encoded><![CDATA[<p>Data, one of the three key elements of AI development, has been the focus of the tech giants&apos; &quot;battle&quot; in this &quot;war of the gods&quot; over AI.</p><p>On July 1, Google updated its privacy policy, making it clear that the company reserves the right to access content posted online by users to train its AI tools.</p><p>The update to Google&apos;s privacy policy reads as follows:</p><p>Google will use information to improve our services and develop new products, features, and technologies that benefit our users and the public. For example, we will use publicly available information to help train Google&apos;s AI models and build products and features such as Google Translate, Bard, and Cloud AI.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/4b736ca13e6339faa0971520c026bf8a19aac8c15d267e905485be038ab20080.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption class="hide-figcaption"></figcaption></figure><p>By comparison, the previous version of Google&apos;s privacy policy said only that the data would be used for &quot;language models&quot;, not &quot;artificial intelligence models&quot;.</p><p>Media analysis suggests that this clause is a significant departure from previous policies, which typically described how a company would use the information users posted on the company&apos;s own services. With this clause, Google appears to reserve the right to collect and use any data posted on public platforms, as if the entire Internet were the company&apos;s own AI playground.</p><p>Anyone has always been able to see what is publicly posted online, but the way that information is used is changing. The public&apos;s focus on data is shifting from who has access to it to how it is used.</p><p>Google&apos;s Bard and ChatGPT may long ago have trained themselves on blog posts you&apos;ve forgotten about or restaurant reviews from years ago. Google did not immediately comment on the public&apos;s privacy concerns.</p><p>Google shows goodwill to the &quot;water sellers&quot;</p><p>Besides its own users, data providers are the other party Google has to &quot;please&quot; if it wants to hoard &quot;data&quot;.</p><p>Data providers are seen as the &quot;water sellers&quot; of the AI era.</p><p>Musk did not want AI companies freeloading on Twitter&apos;s data, so he restricted access, even causing Twitter outages. The same goes for Reddit, the U.S. forum site, which did not want its data freeloaded either: paid APIs arrived, directly forcing several very popular third-party Reddit apps offline. All of this shows how the &quot;water sellers&quot; of the AI era are protecting their &quot;water&quot;.</p><p>Google has taken the lead in showing goodwill to the &quot;water sellers&quot;, and the data of large news publishers is naturally the first focus.</p><p>In recent months, the debate around AI copyright issues has never stopped, exacerbating the already tense relationship between large technology companies and the publishing community. Google has taken the lead in stating that it is willing to pay for news content.</p><p>Citing a newspaper group executive, media reported that Google has worked out a deal to pay for news content in the future:</p><p>&quot;Google has developed a licensing agreement; they are willing to accept the principle that they need to pay to buy content, but we have not yet discussed the amount. The Google side said that negotiations on the amount will take place in the coming months, which is the first step.&quot; In response to the report, Google clarified that the reports of a licensing agreement were &quot;inaccurate,&quot; adding that &quot;it&apos;s still early days and we&apos;re continuing to work with the ecosystem, including news publishers, to get their input.&quot;</p><p>According to Google, they are in &quot;ongoing dialogue&quot; with news organizations in the U.S., U.K. and Europe, while their AI tool Bard is also being trained to &quot;make information publicly available,&quot; which could include sites that require payment.</p>]]></content:encoded>
            <author>tiptap@newsletter.paragraph.com (taptap)</author>
        </item>
        <item>
            <title><![CDATA[The Artificial Intelligence Regulatory Conundrum: How to "Beat Magic with Magic"]]></title>
            <link>https://paragraph.com/@tiptap/the-artificial-intelligence-regulatory-conundrum-how-to-beat-magic-with-magic</link>
            <guid>9823SW9rw9n3joA1Ig4a</guid>
            <pubDate>Tue, 04 Jul 2023 15:23:10 GMT</pubDate>
            <description><![CDATA[The global AI legislative process is clearly accelerating, with regulation around the world catching up with the speed of AI evolution. On June 14, the European Parliament passed the draft negotiating mandate of the AI Act with 499 votes in favor, 28 votes against and 93 abstentions. In accordance with EU legislative procedures, the European Parliament, EU member states and the European Commission will begin "tripartite negotiations" to determine the final terms of the Act. The European Parli...]]></description>
            <content:encoded><![CDATA[<p>The global AI legislative process is clearly accelerating, with regulation around the world catching up with the speed of AI evolution.</p><p>On June 14, the European Parliament passed the draft negotiating mandate of the AI Act with 499 votes in favor, 28 votes against and 93 abstentions. In accordance with EU legislative procedures, the European Parliament, EU member states and the European Commission will begin &quot;tripartite negotiations&quot; to determine the final terms of the Act.</p><p>The European Parliament said it is &quot;ready to negotiate the first ever AI Act&quot;. U.S. President Joe Biden signals control of AI, and a U.S. congressman submits proposed AI regulation legislation. U.S. Senate Democratic Leader Chuck Schumer presented his &quot;Framework for AI Security Innovation&quot; and plans to develop a federal-level AI bill in just &quot;a few months.</p><p>The company is also on the agenda, with a draft AI law ready to be submitted to the Standing Committee of the National People&apos;s Congress for consideration this year. on June 20, the first batch of domestic deep synthesis service algorithms for the record list has also been released, Baidu, Alibaba, Tencent and other 26 companies, a total of 41 algorithms on the list.</p><p>Although China, the United States and the European Union all advocate the principle AI regulatory concepts of accuracy, security and transparency, there are many differences in the specific ideas and approaches. The enactment of comprehensive AI laws is behind the export of their own rules, and the desire to grasp the rules advantage.</p><p>Some domestic experts have called for legal regulation of AI as soon as possible, but the current realistic difficulties faced cannot be ignored. There is also an important consideration: to regulate or to develop. 
This is not a dichotomous choice, but in the digital field, balancing the two is quite difficult.</p><p>EU Sprint, China and US Speed Up</p><p>If all goes well, the Artificial Intelligence Bill passed by the European Parliament is expected to be approved by the end of this year. The world&apos;s first comprehensive AI regulatory law is likely to land in the EU.</p><p>&quot;The draft will influence other countries that are on the fence to accelerate their legislation. It has always been controversial whether AI technology should be included in the rule of law regulation. Now it seems that after the Artificial Intelligence Bill is landed, relevant network platforms, such as those whose business content is mainly generated by user information, will inevitably assume a higher obligation to review.&quot; Zhao Jingwu, an associate professor at Beijing University of Aeronautics and Astronautics Law School, told China Newsweek.</p><p>As part of its digital strategy, the EU hopes to comprehensively regulate AI through the AI Bill, and the strategic layout behind it has been put on the table.</p><p>Peng Xiaoyan, executive director of Beijing Wanshang Tianqin (Hangzhou) Law Firm, told China Newsweek that the Artificial Intelligence Act applies not only to the EU, but also to system providers or users located outside the EU, but whose system output data is used in the EU. 
It greatly expands the scope of jurisdictional application of the bill, and also gives a glimpse of the end of the jurisdictional scope of the data element seizure.</p><p>In the article &quot;The World&apos;s First AI Legislation: The Difficult Balance between Innovation and Regulation&quot;, Jin Ling, a researcher and deputy director of the European Institute of the Chinese Academy of International Studies, also wrote that the AI Bill highlights the moral advantages of AI governance in the EU, which is another attempt of the EU to exert its normative power and make up for the technical shortcomings through the advantage of rules. This reflects the EU&apos;s strategic intent to seize the moral high ground in the field of AI.</p><p>The AI Act has been in the making for two years, and in April 2021, the European Commission proposed AI legislation based on a &quot;risk classification&quot; framework, which has since been discussed and revised in several rounds. After the popularity of generative AI such as ChatGPT, EU lawmakers urgently added another &quot;patch&quot;.</p><p>In a new twist, the latest draft of the AI Bill strengthens transparency requirements for general purpose AI. For example, generative AI based on basic models must label the generated content to help users distinguish between deep falsification and real information, and to ensure that illegal content is prevented from being generated. 
Providers of base models like OpenAI, Google, and others that use copyrighted data during the training of their models are also required to disclose details of the training data.</p><p>In addition, real-time remote biometrics in public places has been changed from a &quot;high risk&quot; level to a &quot;prohibited&quot; level, meaning that AI technology cannot be used for face recognition in public places in EU countries.</p><p>The latest draft also further increases the penalties for violations, from a maximum of €30 million or 6% of the infringing company&apos;s global turnover for the previous fiscal year to a maximum of €40 million or 7% of the infringing company&apos;s global annual turnover for the previous year. This is quite a bit higher than the maximum fine of 4% of global revenue or €20 million under Europe&apos;s landmark data security law, the General Data Protection Regulation.</p><p>Peng Xiaoyan told China Newsweek that the increase in penalty amount side-by-side reflects the EU authorities&apos; determination and strength to regulate artificial intelligence. For Google, Microsoft, Apple and other technology giants with hundreds of billions of dollars in revenue, fines could reach tens of billions of dollars if they violate the provisions of the Artificial Intelligence Act.</p><p>And across the pond in the United States, as Washington was busy responding to calls from Musk and others for stronger AI controls, President Joe Biden met with a group of AI experts and researchers in San Francisco on June 20 to discuss how to manage the risks of the new technology. Biden said at the time that while seizing AI&apos;s enormous potential, the risks it poses to society, the economy and national security need to be managed.</p><p>The context in which risk management has become a hot topic in AI is that the U.S. 
has not taken as tough a step towards AI technology as antitrust and has yet to introduce federal-level, comprehensive AI regulatory laws.</p><p>The U.S. federal government&apos;s first formal foray into AI regulation was in January 2020, when it released the AI Application Regulation Guide to provide guidance on regulatory and non-regulatory measures for emerging AI issues. the National AI Initiative Act of 2020, introduced in 2021, is more of a policy layout in the AI field, with AI governance and strong regulation still some distance away. A year later, the AI Bill of Rights Blueprint (the &quot;Blueprint&quot;), released by the White House in October 2022, provides a supportive framework for AI governance, but is not an official U.S. policy and is not binding.</p><p>Little progress has been made on U.S. AI legislation, which has already drawn much discontent. Many have criticized that the U.S. has fallen behind the EU and China in terms of rule-making for the digital economy. However, perhaps seeing that the EU AI Act is about to pass its final &quot;hurdle,&quot; the U.S. Congress has recently shown signs of legislative acceleration.</p><p>On the day of Biden&apos;s AI meeting, Democratic Representatives Ted W. Lieu and Anna Eshoo, along with Republican Representative Ken Buck, submitted a proposal for the National Artificial Intelligence Council Act. Meanwhile, Democratic Senator Brian Schatz (D-NY) will introduce companion legislation in the Senate, focusing together on AI regulatory issues.</p><p>According to the bill, the AI Commission will consist of a total of 20 experts from government, industry, civil society and computer science, and will review current U.S. approaches to AI regulation and work together to develop a comprehensive regulatory framework.</p><p>&quot;AI is doing amazing things in society. If left unchecked and unregulated, it can cause significant harm. 
Congress must not stand idly by,&quot; Ted Lieu said in a statement.</p><p>A day later, on June 21, Senate Democratic Leader Chuck Schumer (D-N.Y.) gave a speech at the Center for Strategic and International Studies (CSIS) to unveil his &quot;Framework for Secure Innovation in Artificial Intelligence&quot; (the &quot;AI Framework&quot;), which encourages innovation while advancing security, accountability, foundations and interpretability, and echoes macro plans including the Blueprint. He had proposed the framework in April, but the details were largely undisclosed at the time.</p><p>Behind the AI Framework is one of Chuck Schumer&apos;s legislative strategies. In his speech, he said he wanted to develop a federal-level AI bill in just &quot;a few months.&quot; However, the U.S. legislative process is cumbersome: a bill must not only pass votes in the House and Senate but also go through several rounds of hearings, which takes a long time.</p><p>To speed up the process, Chuck Schumer plans to hold a series of AI insight forums as part of the AI Framework, covering 10 topics including innovation, intellectual property, national security and privacy, starting this September. He said the insight forums will not replace congressional hearings on AI, but rather run in parallel so that the legislature can introduce policy on the technology in a matter of months rather than years. He predicted that U.S. 
AI legislation may not produce anything concrete until the fall.</p><p>In early June, the General Office of the State Council issued the State Council&apos;s 2023 legislative work plan, which said a draft artificial intelligence law is being prepared for submission to the Standing Committee of the National People&apos;s Congress for deliberation.</p><p>Under China&apos;s Legislation Law, when the State Council submits a draft law to the Standing Committee of the National People&apos;s Congress, the Council of Chairpersons decides whether to place it on the agenda of a Standing Committee session, or first refers it to the relevant special committee for deliberation and a report before placing it on the agenda; a draft generally goes through three readings before being put to a vote.</p><p>This year, many countries have sped up AI legislation, which Peng Xiaoyan attributes to the combined push of competition and technological development.</p><p>&quot;Data is increasingly becoming a national strategic asset, and countries hope to establish jurisdiction through legislation and seize the initiative in AI discourse. At the same time, iterative advances in AI technology such as ChatGPT have shown society new hope for the development of strong AI. New technologies inevitably bring new social problems and contradictions that require regulatory intervention, and the development of technology has in some way driven the renewal of legislation,&quot; Peng Xiaoyan said.</p><p><strong>Divergence far more than convergence</strong></p><p>China, the United States, and the European Union are the main drivers of global AI development, but there are some differences in AI legislation among the three.</p><p>The EU AI Act classifies AI applications into four risk tiers according to their use and function. Across the several rounds of draft amendments, &quot;risk classification&quot; has remained the core concept of AI governance in the EU.</p><p>The top of the pyramid corresponds to an &quot;unacceptable&quot; risk to human security. For example, scoring systems that classify people based on their social behavior or personal characteristics would be banned altogether.</p><p>In the latest draft, the European Parliament has expanded the list of &quot;unacceptable risks&quot; to prevent AI systems from being intrusive and discriminatory. Six categories of AI systems, such as biometrics in public spaces, emotion recognition, predictive policing (based on profiling, location, or past criminal behavior), and indiscriminate scraping of facial images from the Internet, are banned altogether.</p><p>The second category is AI systems that negatively impact human safety or fundamental rights, which would be considered &quot;high risk.&quot; Examples include AI systems used in products such as aviation, automobiles, and medical devices, as well as eight specific areas that must be registered in the EU database, covering critical infrastructure, education, training, and law enforcement. 
&quot;High-risk&quot; AI systems will be admitted to the EU market only subject to AI regulations and a prior conformity assessment, and must comply with a series of requirements and obligations.</p><p>In addition, AI systems that influence voters and election results, as well as recommendation systems used by social media platforms with more than 45 million users under the EU Digital Services Act, such as Facebook, Twitter and Instagram, will also be included in the high-risk list.</p><p>At the bottom of the pyramid are AI systems with limited risk and those with little or no risk. The former have specific transparency obligations and must inform users that they are interacting with AI systems, while the latter face no mandatory requirements and are largely unregulated, such as spam filters.</p><p>The AI Act is seen by many in the industry as having sharp &quot;teeth&quot; because of its strict regulatory provisions. However, the bill also attempts to strike a balance between strong regulation and innovation.</p><p>For example, the latest draft requires member states to establish at least one &quot;regulatory sandbox&quot; that SMEs and startups can use free of charge to test innovative AI systems in a supervised, secure setting before they are put into use, until they meet compliance requirements. The EU generally believes the proposal will not only allow authorities to watch technological change in real time, but also help AI companies keep innovating under reduced regulatory pressure.</p><p>According to Jin Ling in the aforementioned article, the EU&apos;s upstream governance approach requires companies to bear more upfront costs on the one hand, and dampens their enthusiasm for investment because of uncertainty in risk assessment on the other. 
Thus, despite the Commission&apos;s repeated emphasis that AI legislation will support innovation and growth in Europe&apos;s digital economy, realistic economic analysis does not seem to support this conclusion. The bill reflects an inherent conflict in the EU: the difficulty of effectively balancing the promotion of innovation with the protection of rights.</p><p>The United States, like the EU and China, supports a largely risk-based approach to AI regulation that advocates accuracy, security, and transparency. However, in Zhao Jingwu&apos;s view, U.S. regulatory thinking is more focused on leveraging AI and promoting innovation and development in the AI industry, ultimately to maintain U.S. leadership and competitiveness.</p><p>&quot;Unlike the &apos;risk prevention and technology safety&apos; regulatory philosophy upheld by China and the EU, the U.S. puts commercial development first. China and the EU both focus on the safety and security of AI applications, to keep the abuse of AI from infringing on individual rights, while the U.S. treats industrial development as its regulatory focus,&quot; Zhao Jingwu said.</p><p>One study found that U.S. congressional legislation has focused primarily on encouraging and guiding government use of AI. For example, the U.S. Senate introduced an AI innovation bill in 2021 that would require the U.S. Department of Defense to implement a pilot program to ensure it has access to the best AI and machine learning software capabilities.</p><p>Chuck Schumer, in his aforementioned speech, identified innovation as the North Star; his AI framework is about unlocking the vast potential of AI and supporting U.S.-led AI technology innovation. The AI Application Regulation Guide opens with a clear statement that it should continue to promote the advancement of technology and innovation. The ultimate goal of the National AI Initiative Act of 2020 is also to ensure that the U.S. 
remains a leader in global AI technology through increased investment in research and the creation of workforce systems.</p><p>Peng Xiaoyan said that, judging from its regulatory design, the U.S. still maintains a light-touch posture toward AI at the legislative and institutional level, while at the social level it actively encourages AI innovation and expansion with an open attitude.</p><p>In contrast to the EU, which has more explicit investigative powers and comprehensive regulatory coverage, the U.S. has adopted a decentralized approach to AI regulation, with individual states and agencies advancing AI governance piecemeal. As a result, national AI regulatory initiatives remain very broad and principles-based.</p><p>For example, the Blueprint, a landmark in U.S. AI governance policy, sets out five basic principles - safe and effective systems, prevention of algorithmic discrimination, protection of data privacy, notice and explanation, and human involvement in decision making - with no more detailed provisions.</p><p>According to Peng, the Blueprint does not set out specific implementation measures, but rather builds a basic framework for AI development in the form of principle-based provisions designed to guide the design, use and deployment of AI systems.</p><p>&quot;Rules such as these are not mandatory, which reflects the U.S. intention to support the development of the AI industry. Artificial intelligence is still at an emerging stage of development, and high-intensity regulation would inevitably limit industry development and innovation to some extent, so the United States has kept a relatively modest legislative posture,&quot; Peng Xiaoyan said.</p><p>&quot;Without laws giving agencies new powers, they will have to regulate the use of AI based on the powers they already have. 
On the other hand, by keeping the ethical principles related to AI less prescriptive, agencies can decide for themselves how to regulate and which uses to permit.&quot; This leaves federal agencies, led by the White House, both constrained and free, according to Carnegie analyst Hadrien Pouget.</p><p>This use- and innovation-led philosophy of AI governance means the U.S. punches with a softer &quot;fist.&quot; Alex Engler, a fellow at the Brookings Institution, a leading U.S. think tank, notes that the EU and the U.S. are taking different approaches to regulating AI with social impact in education, finance, and employment.</p><p>In terms of specific AI applications, the EU&apos;s Artificial Intelligence Act imposes transparency requirements on chatbots, while there are no federal-level regulations in the United States. Facial recognition is considered an &quot;unacceptable risk&quot; in the EU, while the U.S. provides public information through the National Institute of Standards and Technology (NIST) Face Recognition Vendor Testing Program but does not mandate rules.</p><p>&quot;The EU&apos;s regulatory scope not only covers a broader range of applications, but also sets more rules for these AI applications. The U.S. approach, on the other hand, is more narrowly limited to adapting current agency regulators to try to govern AI, and the scope of AI covered is much more limited,&quot; Alex Engler said, adding that despite broadly similar principles, there is far more divergence than convergence in AI risk management.</p><p>Zhao Jingwu, summarizing the AI regulatory models of China, the EU and the U.S., found that China regulates by application scenario, developing special rules targeting face recognition, deep synthesis, automated recommendation and other specific scenarios. The EU is oriented by risk level, asking whether the risk of an AI application is acceptable. 
The U.S., on the other hand, judges the legality of AI applications within the framework of its established legal system.</p><p>In addition, the U.S. is paying more attention to AI research and investing more money in it. In early May, the White House announced an investment of about $140 million to establish seven new national AI institutes. Some researchers see the move as an effort to better understand AI and thus ease concerns arising in the regulatory process.</p><p>Peng Xiaoyan, meanwhile, said that China has taken measures to encourage the development of AI technology while applying limited regulation to related fields, guiding AI development by reconciling policy support with management requirements.</p>]]></content:encoded>
            <author>tiptap@newsletter.paragraph.com (taptap)</author>
        </item>
        <item>
            <title><![CDATA[Deleted Sam Altman Talk Transcript: OpenAI also lacks GPUs, cost reduction is top goal]]></title>
            <link>https://paragraph.com/@tiptap/deleted-sam-altman-talk-transcript-open-ai-also-lacks-gpus-cost-reduction-is-top-goal</link>
            <guid>SszmXMKx4ItTvGRpKPwx</guid>
            <pubDate>Thu, 08 Jun 2023 08:43:24 GMT</pubDate>
            <description><![CDATA[Sam Altman&apos;s European tour is still going on. Not long ago, in London, he had a closed-door discussion with the CEO of AI company HumanLoop, a company that helps developers build applications on large language models. Raza Habib, the CEO of HumanLoop, recorded the main points of the conversation and made them publicly available on the company&apos;s website. But the transcript was subsequently taken down at the request of OpenAI. This, in turn, has heightened curiosity about the conversati...]]></description>
            <content:encoded><![CDATA[<p>Sam Altman&apos;s European tour is still going on. Not long ago, in London, he had a closed-door discussion with the CEO of AI company HumanLoop, a company that helps developers build applications on large language models.</p><p>Raza Habib, the CEO of HumanLoop, recorded the main points of the conversation and made them publicly available on the company&apos;s website. But the transcript was subsequently taken down at the request of OpenAI. This, in turn, has heightened curiosity about the conversation. Some speculate that some of the OpenAI positions discussed have since changed.</p><p>After browsing the deleted minutes, Geek Park found that they cover not only Sam&apos;s short-term plans for OpenAI, but also the pressure OpenAI faces even with strong support from Microsoft&apos;s cloud computing resources. After all, fine-tuning and inference on the models still consume a lot of computational resources. According to The Information, OpenAI&apos;s models have cost Microsoft Azure $1.2 billion, and concentrating computing resources on supporting OpenAI has limited the servers available to other Microsoft divisions.</p><p>In response, Sam said cost reduction is the primary goal right now.</p><p>Sam also revealed that services such as longer context windows and fine-tuned APIs are currently constrained by GPU resources.</p><p>In this conversation, Sam Altman responded to many outside concerns, such as competition and commercialization:</p><p>that OpenAI will not consider releasing more products, despite having just hired a world-class product manager, Peter Deng;</p><p>that the future trend in applications is for large-model functionality to be embedded in more apps, rather than more plugins growing on ChatGPT, since in reality most plugins have not shown PMF (Product/Market Fit);</p><p>Over the past few years, OpenAI has scaled its models at a rate of millions of times, but this is 
not sustainable. Going forward, OpenAI will continue to increase model size, but at a rate of 1x to 3x per year, to improve model performance.</p><p>The transcript of the conversation was made public on May 29 and deleted around June 3, according to the weblog. The following is the content obtained through a backup:</p><ol><li><p>OpenAI is currently severely limited by GPUs</p></li></ol><p>As conversations grow longer, the computational resources required grow quadratically</p><p>OpenAI currently has very limited GPUs, which is delaying many of its short-term plans. Sam acknowledged these concerns and explained that most of the problems are due to GPU shortages.</p><p>The longer 32k context can&apos;t yet be rolled out to more people. OpenAI hasn&apos;t overcome the O(n^2) scaling of attention, so while it seems plausible they will have 100k-1M token context windows soon (this year), anything bigger would require a research breakthrough.</p><p>Note: O(n^2) means that the computational resources required for attention grow quadratically as the sequence length increases. Big-O notation describes an upper bound, or worst case, on how fast an algorithm&apos;s time or space complexity grows; n^2 indicates that the complexity is proportional to the square of the input size.</p><p>Fine-tuning APIs are also currently limited by GPU availability. They do not yet use efficient fine-tuning methods like Adapters or LoRA, so running and serving fine-tuned models is very computationally intensive. Better support for fine-tuning will be provided in the future. 
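</p><p>The LoRA idea mentioned above can be sketched in a few lines. This is an illustrative NumPy toy under assumed dimensions, not OpenAI&apos;s implementation: instead of updating a large weight matrix during fine-tuning, LoRA trains a small low-rank update, which cuts the number of trainable parameters dramatically.</p>

```python
import numpy as np

# Illustrative LoRA sketch (assumed dimensions, not a real model):
# keep the pretrained weight W frozen and learn a low-rank update B @ A.
d, r = 1024, 8                       # model width and LoRA rank (assumptions)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))      # frozen pretrained weight
A = rng.standard_normal((r, d))      # trainable, r x d
B = np.zeros((d, r))                 # trainable, d x r; zero init => no change at start

def forward(x):
    # Effective weight is W + B @ A, applied without materializing it.
    return x @ W.T + x @ A.T @ B.T

full_params = d * d                  # parameters a full fine-tune would update
lora_params = d * r + r * d          # parameters LoRA actually trains
```

<p>At d=1024 and r=8, LoRA trains 16,384 parameters instead of 1,048,576 - a 64x reduction - which is why such methods make fine-tuned models far cheaper to run and manage.</p><p>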
They may even host a community-based model contribution marketplace.</p><p>Dedicated capacity provisioning is limited by GPU availability. OpenAI offers dedicated capacity, giving customers private copies of its models. To obtain this service, customers must be willing to commit to paying $100,000 up front.</p><ol><li><p>OpenAI&apos;s near-term roadmap</p></li></ol><p>2023, Reducing the Cost of Intelligence; 2024, Limited Demonstration of Multimodality</p><p>Sam also shared what he sees as the provisional near-term roadmap for the OpenAI API.</p><p>2023:</p><p>Cheaper and faster GPT-4 - this is their top priority. Overall, OpenAI&apos;s goal is to drive the &quot;cost of intelligence&quot; as low as possible, so they will work hard to keep reducing the cost of the API over time.</p><p>Longer context windows - in the near future, the context window could reach 1 million tokens.</p><p>Fine-tuning API - the fine-tuning API will extend to the latest models, but its exact form will depend on what developers say they really want.</p><p>A stateful API - when calling the chat API today, you have to resend the same session history over and over, paying for the same tokens repeatedly. A future version of the API will remember the session history.</p><p>2024:</p><p>Multimodality - this was demonstrated as part of the GPT-4 release, but can&apos;t be extended to everyone until more GPUs come online.</p><ol><li><p>Commercialization prognosis and thoughts: plugins have &quot;no PMF&quot; and may not appear in the API soon</p></li></ol><p>Many developers are interested in accessing ChatGPT plugins through the API, but Sam says he doesn&apos;t think they will be released anytime soon. With the exception of the Browsing plugin, usage of other plugins suggests there is no PMF (Product/Market Fit) yet. 
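</p><p>The &quot;stateful API&quot; roadmap item above is easiest to see in code. A minimal sketch with a mock client (hypothetical names, not the actual OpenAI SDK) shows why a stateless chat API re-bills context: every call must resend the full history, so the billed tokens grow with each turn.</p>

```python
# Why a stateless chat API re-bills context: each call resends the whole
# history. Mock client with a crude whitespace "tokenizer" (illustrative
# only; hypothetical names, not the real OpenAI SDK).

def count_tokens(text):
    return len(text.split())        # stand-in for a real tokenizer

class MockChatAPI:
    def complete(self, messages):
        billed = sum(count_tokens(m["content"]) for m in messages)
        return {"reply": "ok", "billed_tokens": billed}

api = MockChatAPI()
history, costs = [], []
for turn in ["hello there", "tell me more", "thanks a lot"]:
    history.append({"role": "user", "content": turn})
    resp = api.complete(history)    # the entire history is resent every call
    history.append({"role": "assistant", "content": resp["reply"]})
    costs.append(resp["billed_tokens"])
```

<p>Each turn re-pays for all earlier tokens (costs climb from 2 to 6 to 10 in this toy run); a stateful API that remembers the session server-side would bill only the new turn.</p><p>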
He noted that many people think they want their apps to be inside ChatGPT, but what they really want is ChatGPT inside their apps.</p><ol><li><p>Beyond ChatGPT, OpenAI will avoid competing with its customers</p></li></ol><p>Great companies have a killer app</p><p>Many developers say they are nervous about building on the OpenAI API because OpenAI could eventually release products that compete with them, Sam said, adding that OpenAI will not release more products outside of ChatGPT. Historically, he said, great platform companies have a killer app. ChatGPT will allow them to improve the API by being customers of their own product. ChatGPT&apos;s vision is to be a super-intelligent work assistant, but there are many other GPT use cases OpenAI won&apos;t address.</p><ol><li><p>Regulation is needed, but not now</p></li></ol><p>&quot;I have doubts about how many individuals and companies are capable of holding big models.&quot;</p><p>While Sam called for regulation of future models, he doesn&apos;t think existing models are dangerous and thinks it would be a big mistake to regulate or ban them. He re-emphasized the importance of open source and said that OpenAI is considering making GPT-3 open source. They haven&apos;t open-sourced it yet, in part because he is skeptical about how many individuals and companies have the capacity to host and serve large language models (LLMs).</p><ol><li><p>The law of scale still applies</p></li></ol><p>The rate of scaling millions of times over several years cannot last forever</p><p>There have been many articles recently claiming that &apos;the era of giant AI models is over.&apos; This is not accurate. 
(Note: At an event at MIT in April, Sam Altman said that we are now nearing the end of the era of giant models.)</p><p>OpenAI&apos;s internal data shows that scaling laws for model performance still apply: increasing model size will continue to improve performance.</p><p>Since OpenAI has scaled its models millions of times in just a few years, that rate of scaling cannot continue. This does not mean OpenAI will stop trying to make models larger, but they may double or triple in size each year rather than grow by many orders of magnitude.</p><p>That scaling laws still hold has important implications for the AGI development timeline. The scaling-law view assumes we probably already have most of the pieces needed to build AGI, and that the remaining work is largely scaling existing methods to larger models and larger datasets. If the era of scaling were over, we might be even further from AGI. The fact that scaling laws continue to apply strongly suggests a shorter timeline.</p>]]></content:encoded>
            <author>tiptap@newsletter.paragraph.com (taptap)</author>
        </item>
        <item>
            <title><![CDATA[Apple Vision Pro is not a "savior"]]></title>
            <link>https://paragraph.com/@tiptap/apple-vision-pro-is-not-a-savior</link>
            <guid>B3p1OxJlzWvkaykrdTRx</guid>
            <pubDate>Thu, 08 Jun 2023 08:41:42 GMT</pubDate>
            <description><![CDATA["The Mac brought us into the era of the personal computer, the iPhone brought us into the era of mobile computing, and Vision Pro will bring us into the era of spatial computing!" These were the opening words of Apple CEO Tim Cook as he introduced the Vision Pro, the MR device announced today, and the highest expectations for a product that has been eight years in the making and repeatedly jumped the gun. MR was undoubtedly the topic that received the most attention from the outside world at ...]]></description>
            <content:encoded><![CDATA[<p>&quot;The Mac brought us into the era of the personal computer, the iPhone brought us into the era of mobile computing, and Vision Pro will bring us into the era of spatial computing!&quot;</p><p>These were the opening words of Apple CEO Tim Cook as he introduced Vision Pro, the MR device announced today - and the highest expectations yet for a product eight years in the making whose launch had repeatedly slipped.</p><p>MR was undoubtedly the topic that drew the most outside attention at this year&apos;s Apple Worldwide Developers Conference (WWDC 2023). Just a few hours before the event, Cook himself tweeted that &quot;this year will be Apple&apos;s best developer conference ever.&quot;</p><p>After a long, if unexciting, 80 minutes - with executives introducing updates to iOS, iPadOS 17, macOS, watchOS, tvOS and other established projects - the long-awaited &quot;One More Thing&quot; session finally arrived, with Apple Vision Pro making its grand finale appearance.</p><p>Vision Pro not only brings together Apple&apos;s eight years of hard work, but also carries the expectations of many VR and AR practitioners. VR and AR have long been seen as the carrier of next-generation smart hardware after smartphones, but after years of ups and downs and rounds of industry reshuffling, the sector still seems not to have reached its explosive inflection point.</p><p>Given Apple&apos;s glory in the Mac and iPhone eras, people have reason to believe Apple can repeat that success, lead market demand, and this time become the long-awaited &quot;savior&quot; of the VR and AR industry.</p><p>Cook&apos;s talk of &quot;opening the era of spatial computing&quot; also seems to set an obligatory bar for Vision Pro. 
But can Apple really do it?</p><p>We interviewed several VR and AR practitioners to see whether the release of Apple&apos;s Vision Pro can really open a new era.</p><ol><li><p>The three highlights of Vision Pro</p></li></ol><p>As the most important hardware product Apple has released since 2014, Vision Pro still delivers plenty of surprises.</p><p>In overall appearance, Vision Pro is a beautiful product. Wu Fei, founder and CEO of Bright Vision, told &quot;A Light Year&quot;: &quot;Apple&apos;s industrial design is absolutely artistic, from the glossy skin-friendly material to the wide rear headband, to every button and the one-piece curved front glass - to put it plainly, an ergonomic answer &apos;piled up with money.&apos; Apple&apos;s designs set the standard as soon as they ship, and are worth everyone copying.&quot;</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/13843b71b6468c1174b3bd673f6fd10bcc8a22d7417c675e8f4a8334338df2e6.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Hardware-wise, Vision Pro has a top-notch configuration.</p><p>Chip: Vision Pro adopts a dual-chip design - the same M2 chip as the Mac to handle computing power, and a real-time sensor processing chip, R1, to handle latency. R1 processes input from 12 cameras, five sensors and six microphones to ensure content is presented in real time. 
To deal with the heat generated by its high computing power, Apple added two fans to Vision Pro to help dissipate heat.</p><p>Screen: Vision Pro is equipped with micro-OLED displays totaling 23 million pixels across both screens - more pixels per eye than a 4K TV - and supports 4K wide-color-gamut video and HDR rendering.</p><p>Sensing: Vision Pro has 12 cameras in total, covering forward, sideways and downward views, plus a front-facing LiDAR scanner, infrared cameras and LED illuminators; five sensors continuously scan the surroundings, bringing the user&apos;s real environment into the virtual space in real time. It is also equipped with Optic ID iris recognition: once the user puts the device on, it can confirm their identity through the pupil. On privacy, the device does not record the user&apos;s gaze path during eye movement; only when the user pinches their fingers to select an icon does it record the action.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/5e98d9985857ee8553a3f6c9d4638e4c92142fed2898c1f571486b1c0b9797d6.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Building on this stacked hardware configuration, Vision Pro brings a new way of interacting: instead of handheld controllers, users control the device through gestures, eye movements and voice via Siri.</p><p>According to Wu Dezhou, founder and CEO of To the Unknown, Vision Pro defines the MR 3D spatial interaction method, which is relatively &quot;revolutionary&quot;. 
If this interaction method wins universal acceptance, the future application ecosystem will build applications around a unified interaction model - a great benefit for application developers.</p><p>However, to reduce weight, Vision Pro has no built-in battery; it trails a cable to an external power pack, and battery life is only two hours. Apple did not disclose the device&apos;s exact weight at the event.</p><p>For nearsighted users, Apple has partnered with Zeiss on optical inserts that attach magnetically to the lenses, ensuring that nearsighted users get accurate viewing and eye tracking even while wearing the device. These personalized optical inserts cost extra, but Apple and Zeiss did not give a clear price.</p><p>Notably, Apple did not say directly at the event whether Vision Pro is a VR or AR product, a headset or glasses; instead it called the device a &quot;spatial computing device&quot;. At the top right of Vision Pro, Apple designed a knob that adjusts immersion by seamlessly blending the ratio of &quot;virtual reality&quot; to &quot;augmented reality&quot;.</p><p>Many practitioners are also discussing the details of Apple&apos;s product definition. 
According to Wu Fei, the Vision Pro&apos;s knob sets a new standard for spatial interaction devices and is a &quot;brilliant design.&quot;</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/4256a77219041f41274c1f4c5bba878af38f244a9bd267219845587ce34c2df4.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>XREAL (formerly Nreal) founder and CEO Xu Chi said: &quot;If Apple&apos;s product is defined as MR and what XREAL makes is now called AR, there is no conflict between the two. Both are spatial computing and 3D; the difference is that one is more like a computer (extremely immersive, less portable) and the other more like a phone (portable, used in fragments, but less immersive). I believe they will coexist for a long time.&quot;</p><p>The person in charge of PICO said: &quot;The integration of the real and the virtual is an industry trend - for example, a product can have both &apos;VR + MR&apos; characteristics - and this trend started in 2022. The launch of Apple Vision Pro will accelerate it while giving the industry a new technical-route reference for blending reality and the virtual.&quot;</p><p>Rokid founder and CEO Zhu Mingming also said: &quot;In the future only the word AR will be used; the difference is just between VST (video see-through) and OST (optical see-through) technologies and products.&quot;</p><p>On the software side, Apple follows its usual practice of releasing a supporting operating system when launching hardware. 
This time was no exception.</p><p>Apple created visionOS, a spatial computing operating system built specifically for Vision Pro, with its own app store and compatibility with iOS, iPadOS and other Apple systems, dovetailing seamlessly with the software and hardware ecosystem Apple has accumulated over the years.</p><p>Wu Dezhou believes Vision Pro is a very practical tool platform: &quot;What Vision Pro does better is that it does not pour energy into 3D, digital humans and other virtual content the way other VR products do, but focuses on integrating Apple&apos;s ecosystem. Perhaps once the applications are rich enough, Apple may really open the next era of spatial computing.&quot;</p><p>In summary, stronger computing power, more innovative interaction and a richer developer ecosystem are the three main highlights of Apple Vision Pro.</p><ol><li><p>The next &quot;iPhone moment&quot;?</p></li></ol><p>Despite Cook&apos;s on-stage enthusiasm for defining the next era, outside evaluations of Vision Pro are mixed.</p><p>The most visible signal was the stock price. During the event, Apple&apos;s shares had been rising, at one point up more than 2%, but began to decline once Vision Pro was announced. By the close of U.S. trading, Apple stood at $179.58 per share, down 0.76%, for a total market value of about $2.82 trillion. Investor confidence in Vision Pro is evidently not high.</p><p>Perhaps that lack of confidence stems from the imperfections of the product itself.</p><p>First, Apple did not let consumers try the device at the launch event; even the media filming on site were prohibited from touching it. 
Vision Pro will not ship until 2024, which reminds people of the fate of Apple&apos;s last product that was announced but never sold: AirPower.</p><p>Apple unveiled its wireless charging mat, AirPower, in 2017 and teased a launch the following year. But after eighteen months of silence, Apple officially axed AirPower, saying it could not live up to expectations. While AirPower&apos;s product status was never comparable to Vision Pro&apos;s, the concern is not without merit.</p><p>Even judging from the scenarios that were demonstrated, Vision Pro still seems some distance away from an &quot;iPhone moment&quot;.</p><p>In terms of use cases, Apple&apos;s presentation focused on office and entertainment scenarios such as movies, games and social networking. In some people&apos;s view these features are hardly must-haves; some even quipped, &quot;If I talked to my colleagues through this in the office, they might think something is wrong with me.&quot;</p><p>On this point, Wu Dezhou also said that, on the one hand, he did not see a particularly revolutionary application scenario; the current office and entertainment scenarios only work in relatively closed spaces. On the other hand, he did not see scenarios that deeply combine the product with AI. &quot;After the mobile Internet era, the AI Internet era will arrive in full force. In that era, AR and VR products should combine more with AI technology in order to deliver better experiences and application scenarios.&quot;</p><p>In addition, Vision Pro&apos;s content ecosystem is relatively thin. Disney, an important ecosystem partner this time, unveiled entertainment content and interactive games with a stronger sense of space. At this stage Apple still relies heavily on technically strong partners joining the ecosystem. 
&quot;My judgment is that Apple&apos;s core mission for the first one or two generations of Vision Pro is to pry open the ecosystem and foster more new applications that match the Vision Pro experience and device capabilities,&quot; said Wu Fei of Bright Vision.</p><p>Vision Pro is priced at $3,499, about 25,000 yuan and roughly three times the price of Meta&apos;s high-end Quest Pro headset; many netizens exclaimed that they simply could not afford it.</p><p>The person in charge at Bright Wind Stage said, &quot;The high price also shows that truly mass-market AR products still face a big challenge.&quot;</p><p>Given all these &quot;imperfections&quot;, attitudes toward Vision Pro inside Apple were also polarized.</p><p>On the release timing, the product&apos;s engineering team wanted to make the device lighter before shipping it, which would have taken a few more years and was more in line with Steve Jobs&apos; tradition of pursuing the ultimate product experience, while the team led by COO Jeff Williams believed the time was ripe for release.</p><p>Reviews of the product&apos;s future inside Apple were mixed as well. At an internal meeting this year, about 100 executives watched an early demonstration of the device at the Steve Jobs Theater. But eight current and former employees, speaking anonymously, said they were worried about the device&apos;s roughly $3,000 expected price, skeptical of its usefulness and concerned about its unproven market.</p><p>Some experts have also expressed skepticism. AR/VR optics expert Karl Guttag previously wrote that Apple&apos;s development of an AR/VR headset looked more like a fear of missing out on a new technology trend. 
Looking back at key AR products of the past, from Google Glass to Magic Leap and HoloLens, heavy R&amp;D investment never produced the market response that was expected.</p><p>Judging from the result, though, Cook is not Steve Jobs after all: he chose to side with the operations team. After all, whether Cook leaves a strong mark on Apple&apos;s history is closely tied to whether its MR device ships on schedule.</p><ol><li><p>Apple Vision Pro is not the savior</p></li></ol><p>After years of development, spring still has not arrived for the VR and AR industry.</p><p>According to IDC, global shipments of AR and VR headsets slowed sharply in the first quarter of 2023, with the overall AR/VR headset market down 54.4% year-on-year.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/d350473ffcdd72492a7bcb07c4eb3295fa3a19624c88282fb9bf82d34a3ed653.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>VR and AR are, by nature, a very hard business. XREAL&apos;s Xu Chi once joked: &quot;Why is AR so hard? Because the word &apos;hard&apos; itself contains &apos;AR&apos;. And what is harder than AR? HD AR, because the whole word &apos;HARD&apos; is just &apos;HD&apos; plus &apos;AR&apos;.&quot;</p><p>From a technical point of view, the design and manufacture of AR devices still run up against many physical limits, and the relevant technologies still need major breakthroughs. Because of the small display and optical modules they use, AR/VR headsets also face problems such as diffraction and low light efficiency. 
Even VST cannot sidestep the problems of AR optics, because VST mixed-reality headsets still have limitations in social use, safety and ergonomics.</p><p>The sheer difficulty of designing and manufacturing AR devices means that companies entering the field must invest heavily in money and time, which demands real strength from entrants.</p><p>In fact, Apple has been investing in AR for many years. In 2015 it launched the AR/VR project codenamed T288, which included the MR headset codenamed N301 and the AR glasses codenamed N421.</p><p>Since 2015, Apple has acquired companies across the AR supply chain almost every year, including firms in facial tracking, eye tracking, AR, VR, AR lenses, VR content and VR headset production.</p><p>On the software side, Apple released ARKit back in 2017 and has announced a new version at WWDC every year since; by 2022 ARKit had iterated to its sixth generation. Apple has also introduced new AR features such as AR Maps, Object Capture and RealityKit 2.</p><p>These achievements are the result of Apple assembling more than a dozen executives and consistently spending billions of dollars. As some practitioners put it, &quot;If even Apple can only get this far, it shows how hard the industry really is.&quot;</p><p>But even with a perfect product, who will use it and where it will be used are questions companies still need to face.</p><p>Before Apple released Vision Pro, many practitioners looked forward to Apple&apos;s entry, some even casting Apple as the last &quot;savior&quot;. 
In the event, however, practitioners have been quite rational.</p><p>&quot;My very distinct personal feeling is the contrast in innovation, appeal and influence between the products before and after Apple&apos;s &apos;one more thing&apos;; public expectation of, and the drive to innovate toward, the &apos;next-generation computing device&apos; has reached a critical point. The computing field needs a new epoch-making terminal. &apos;Savior&apos; is slightly exaggerated, but it does boost confidence and makes everyone believe that the next generation of terminal devices after the cell phone is within reach.&quot;</p><p>A person in charge at PICO said, &quot;As a hugely creative brand, Apple, with its brand power, technological strength and supply-chain integration, will indeed accelerate the development of the XR industry. But we also need to recognize that XR is an industry requiring long-term investment, so we should not view its development with a &apos;one hit wins all&apos; mentality. The industry faces some extremely complex XR technology problems. We are on the verge of an exciting time, but the XR industry still has a long, long way to go.&quot;</p><p>As Rokid founder Zhu Mingming wrote on his WeChat feed, &quot;There is no savior in AR entrepreneurship. This launch will not, by itself, bring more confidence to the industry and the market; it still takes the explorers in the industry working harder to prove themselves, and stockpiling enough provisions to fight a long and magnificent war.&quot;</p>]]></content:encoded>
            <author>tiptap@newsletter.paragraph.com (taptap)</author>
        </item>
        <item>
            <title><![CDATA[Rising to the occasion! Can Google reclaim its throne in AI?]]></title>
            <link>https://paragraph.com/@tiptap/rising-to-the-occasion-can-google-reclaim-its-throne-in-ai</link>
            <guid>VQZXTbf1OhSFWlb8kgEU</guid>
            <pubDate>Wed, 03 May 2023 15:50:33 GMT</pubDate>
            <description><![CDATA[From the first release of ChatGPT to the iterative updates of GPT-4, OpenAI has upended the entire AI industry in just a few months, and a global AI arms race led by OpenAI has been launched. From Google, Meta, Amazon, Tesla and Apple abroad, to Baidu, Alibaba and Tencent at home, the major technology giants are piling in, adding flowers to the brocade and oil to the flames of the AIGC market. Google, an old rival of Microsoft, has been at the forefront of the AI development wave and leading the applicat...]]></description>
            <content:encoded><![CDATA[<p>From the first release of ChatGPT to the iterative updates of GPT-4, OpenAI has upended the entire AI industry in just a few months, setting off a global AI arms race. From Google, Meta, Amazon, Tesla and Apple abroad to Baidu, Alibaba and Tencent in China, the major technology giants are piling in, adding flowers to the brocade and oil to the flames of the AIGC market.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/f93e756c7e3a8b254f56b31682550e3c9055465f28e8975690d636f674d9e36d.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Google, an old rival of Microsoft, has long been at the forefront of the AI wave, pulling out all the stops to lead the application of AI technology. Now Google&apos;s parent company Alphabet has officially announced that it will merge its two main AI research departments, Google Brain and DeepMind, complemented by another major move: a new-generation AI search project, Magi. This full &quot;counter-attack combination&quot; is Google&apos;s latest bid to catch up with OpenAI and Microsoft, to avoid losing this race and to retake the lead.</p><p>The Battle of the Search Engines</p><p>Search engines have always been the gateway to the Internet, and the phrase &quot;Google it and you&apos;ll know&quot; is proof enough of Google&apos;s status as the world&apos;s leading search engine in most markets. 
However, &quot;eternal&quot; is a kind of idealism; in the face of complex reality nothing is eternal, least of all in the fast-changing technology industry.</p><p>With the rapid development of artificial intelligence, chatbots led by Bing+ChatGPT have seized the opportunity, letting users gather information through conversations in different creative styles and assisting with tasks such as checking the weather, ordering food, writing, work and study. Bing+ChatGPT is highly regarded for its conversational ability, its access to up-to-date information, its text comprehension and information synthesis, and its &quot;human&quot;, creative expression, providing a more convenient, personalized and fun search experience that has made it the biggest hit in today&apos;s search engine market.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/b77aaab177869b3727594fd85f34d5cc03b5980c13cdb32e8fea81dc577623f7.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>The successful pairing of the Bing search engine with ChatGPT has handed Microsoft a resounding reversal, demonstrated its strength in artificial intelligence and shown the broad prospects and potential of Bing in the global search market; it has also put the central position of Google&apos;s search engine under serious threat.</p><p>Since Microsoft released the new Bing in February, traffic to it has grown by 15.8%, while Google&apos;s has dropped by 1%. While 1% may seem small, the impact is serious. 
A report in the New York Times said that Samsung is considering replacing Google with Bing as the default search engine on its devices.</p><p>Facing this situation, Google, in addition to the previously announced chatbot Bard as its search engine&apos;s &quot;partner&quot;, is also rapidly building a new AI-based search engine, codenamed Magi, to compete with Microsoft&apos;s new Bing.</p><p>As a search engine built on artificial intelligence, Magi is meant to give users efficient, accurate and personalized search results. Its defining feature is &quot;personalized search&quot;: it uses natural language processing and deep learning over a user&apos;s search habits to adapt to that user&apos;s needs and return more accurate results, and it can also place advertising alongside the results.</p><p>More detailed information is not yet available, but Google spokeswoman Lara Levin said in a statement, &quot;Not every brainstorm or product idea leads to a launch, but as we&apos;ve said before, we&apos;re excited about bringing new AI capabilities to search and will share more details soon.&quot; Magi is understood to be going live in May, so stay tuned for a cross-generational battle of the search engines.</p><p>Join hands and accelerate</p><p>Google&apos;s showing in the AI battle so far looks like a rout: Bard&apos;s release was somewhat rushed, and the shock and breakthrough it delivered fell short of Microsoft&apos;s launches. If the family does not now unite, how can Google keep its once-dominant &quot;throne&quot; in artificial intelligence? 
So Google is ready to go big.</p><p>On April 20, local time, Google CEO Sundar Pichai announced the merger of its two artificial intelligence research departments, DeepMind and Google Brain, into a new unit called Google DeepMind. The merged Google DeepMind will be led by DeepMind CEO Demis Hassabis, with former Google AI chief Jeff Dean becoming chief scientist.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/5d51a352bb5406cae2210343805c1d0ff60d9575f922f84a19d791a39ae8affc.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>DeepMind was in fact acquired by Google for $500 million back in 2014, but Google long treated Google Brain and DeepMind as two separate teams. To avoid the resource competition that overlapping technologies might bring, the two divisions had little communication or cooperation and were kept at arm&apos;s length.</p><p>Over the past decade, the two teams&apos; achievements in artificial intelligence include models such as AlphaGo, Transformers, word2vec, WaveNet and AlphaFold, as well as distributed systems and software frameworks such as TensorFlow and JAX. AlphaGo famously defeated Go world champion Lee Sedol in 2016, and the protein-structure-predicting AlphaFold system has the potential to profoundly affect medicine.</p><p>The Google Brain team, for its part, has been dedicated to applying the latest neural network techniques to Google&apos;s business, with early successes in areas such as machine translation and image recognition. 
The team is currently working on the popular transformer architecture that powers large AI systems around the world, the TensorFlow machine learning library, and new techniques for training and scaling large language models.</p><p>With more and more AI products appearing, Google, under tremendous pressure, realized it needed to act to defend its position and decided to have its two aces join forces and fight side by side. The merger lets Alphabet concentrate its most advanced research resources on AI models and apply them directly to Google&apos;s Internet business.</p><p>Pichai wrote on Google&apos;s official blog, &quot;Google DeepMind will operate as a nimble, fast-paced department with clear connections and collaboration with Google Research and other departments.&quot; Hassabis said in his open letter, &quot;With Google DeepMind, we&apos;re bringing together world-class talent in AI with the computing power, infrastructure and resources to create the next generation of AI breakthroughs and products at Google and Alphabet in a bold and responsible way.&quot;</p><p>Google DeepMind is reported to be developing a series of powerful multimodal AI models with the aim of beating Microsoft to AGI. As for how things develop from here, let us wait and see.</p><p>How to win in the future?</p><p>In recent years, new products have kept emerging in generative AI, and they continue to reset people&apos;s expectations for AI&apos;s future in astonishing ways. 
ChatGPT, with its excellent natural language processing, has become the center of attention; the competition in generative AI is in full swing, and Google, at the heart of the industry, is struggling to catch up.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/a513d5b59a30ff7b272d71832e64468398d9e13a6e197464a4ec3ec7b15d786a.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Compared with OpenAI and Microsoft, Google, having already &quot;flopped&quot; once with Bard, is clearly more cautious. Google has kept improving Bard in other ways, previously rolling out an update driven by a new language model called PaLM. A few days ago the company added an &quot;experimental updates&quot; page where all of Bard&apos;s changes, new features and bug fixes can be seen; beyond generating code, Bard can now also debug code and explain code snippets.</p><p>Meanwhile, Google plans to introduce generative AI into its advertising business in the coming weeks, embedding the technology in still more products. Google already uses AI in its advertising business to generate simple prompts that encourage users to buy. 
With the latest technology from the Bard chatbot, it could create far more sophisticated advertising content, even rivaling the professional work of ad agencies.</p><p>However, people familiar with the matter say Bard still has language problems; next to ChatGPT&apos;s powerful data processing and language output, Bard often states wrong content in a confident tone, raising concerns that such tools may spread misinformation. Some Google employees have even said internally that Google was shipping low-quality information through Bard in order to keep up with competitors, while giving a low priority to its ethical commitments.</p><p>Google began chatbot R&amp;D as early as a decade ago, yet it has fallen behind in the AI wars: up early, but late to the fair. Still, the industrial deployment of large models and generative AI is bound to be a long, hard race, and Google still has a chance.</p><p>To land AI applications, a company needs to integrate them with its existing problems, build them into its strongest products, achieve breakthrough innovation and retain users on raw capability. Although Google has considerable achievements in AI, the field changes rapidly, and constant technical reinforcement is necessary to stay ahead of the competition.</p><p>Google should therefore strengthen its exploration and research of new technologies and integrate internal and external core resources to improve its R&amp;D strength, and it should improve human-computer interaction to attract more users to its products. 
At the same time, while developing new technologies and improving the user experience, Google also needs to strengthen security and privacy protection; a &quot;powerful and socially friendly&quot; AI strategy is the long-term development plan. Finally, Google can cooperate with other companies to jointly advance artificial intelligence.</p><p>After all, no single company can shoulder all the research and development in this field alone; only by sharing resources and knowledge can better results be achieved. Google can therefore actively seek partners and drive the whole industry forward on a mutually beneficial, win-win basis. With Google&apos;s continued efforts, there should be more surprises and breakthroughs ahead.</p><p>Whether it is the Magi search engine due in May or the powerful union of two world-class AI teams, we can see Google&apos;s determination in its AI preparations. &quot;Technology changes the world&quot; is the shared pursuit of many participants, and, as its CEO has said, we expect Google to &quot;dramatically improve the lives of billions of people, change industries, advance science, and serve different communities&quot; through better AI products.</p>]]></content:encoded>
            <author>tiptap@newsletter.paragraph.com (taptap)</author>
        </item>
        <item>
            <title><![CDATA[Embracing the Future: Subscribe and Get a Free NFT]]></title>
            <link>https://paragraph.com/@tiptap/embracing-the-future-subscribe-and-get-a-free-nft</link>
            <guid>ODW27f6cKz3044lxSsuB</guid>
            <pubDate>Tue, 21 Mar 2023 12:43:17 GMT</pubDate>
            <description><![CDATA[As a leading innovation company dedicated to exploring the latest trends and human insights, we understand the importance of staying ahead of the curve. That&apos;s why we&apos;re excited to announce that we&apos;re offering a unique opportunity for you to stay up-to-date with our latest insights and ideas: by subscribing to our content account, you&apos;ll receive a free, exclusive NFT! But what exactly is an NFT? In short, it&apos;s a digital asset that is uniquely identifiable and impossib...]]></description>
            <content:encoded><![CDATA[<p>As a leading innovation company dedicated to exploring the latest trends and human insights, we understand the importance of staying ahead of the curve. That&apos;s why we&apos;re excited to announce that we&apos;re offering a unique opportunity for you to stay up-to-date with our latest insights and ideas: by subscribing to our content account, you&apos;ll receive a free, exclusive NFT!</p><p>But what exactly is an NFT? In short, it&apos;s a digital asset that is uniquely identifiable and impossible to replicate. NFTs are revolutionizing the way we think about ownership and value, and they have already begun to transform the art world. By subscribing to our content account and receiving an NFT, you&apos;ll be part of this exciting new movement, and have a piece of digital history to call your own.</p><p>But why should you subscribe to our content account in the first place? Simply put, our company is dedicated to exploring the latest trends and insights in order to help you stay ahead of the curve. From emerging technologies to shifting cultural trends, we cover a wide range of topics that are relevant to your personal and professional life. By subscribing to our content account, you&apos;ll have access to exclusive articles, videos, and other content that you won&apos;t find anywhere else.</p><p>In addition to our insightful content, we&apos;re also committed to building a community of like-minded individuals who are passionate about the future. By subscribing to our content account, you&apos;ll have the opportunity to engage with other subscribers, share your own insights, and be part of a cutting-edge community that is shaping the future.</p><p>And of course, we can&apos;t forget about the NFT! By subscribing to our content account and receiving an NFT, you&apos;ll have a unique digital asset that you can cherish for years to come. 
Whether you&apos;re an art collector, a technology enthusiast, or just someone who wants to be part of the latest trend, our NFT is sure to be a valuable addition to your collection.</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://opensea.io/assets/ethereum/0x1c258483cc721051fbcC6c8088B64Bdfc0e136a2/0">https://opensea.io/assets/ethereum/0x1c258483cc721051fbcC6c8088B64Bdfc0e136a2/0</a></p><p>So what are you waiting for? Subscribe to our content account today and receive your free NFT. It&apos;s a unique opportunity to stay ahead of the curve, connect with like-minded individuals, and own a piece of digital history. We can&apos;t wait to welcome you to our community!</p>]]></content:encoded>
            <author>tiptap@newsletter.paragraph.com (taptap)</author>
        </item>
        <item>
            <title><![CDATA[Scary Week, Crypto Industry 'Crisis' May Have Only Begun]]></title>
            <link>https://paragraph.com/@tiptap/scary-week-crypto-industry-crisis-may-have-only-begun</link>
            <guid>mHAvBUzAgHHJCbRnf5j9</guid>
            <pubDate>Tue, 21 Mar 2023 06:14:19 GMT</pubDate>
            <description><![CDATA[March 12, which should be an ordinary day, is becoming special in the crypto industry, repeatedly making the crypto market &apos;jumpy&apos;. Last week, three U.S. banks went bankrupt one after another, and the crypto industry spent the day in shock...what&apos;s their magic? Is the risk in the crypto industry really over with the Fed bailout? Perhaps this is just the beginning and the real black swan may be on the way. This article does not intend to play up the panic, but cyclically, at lea...]]></description>
            <content:encoded><![CDATA[<p>March 12, which should have been an ordinary day, is becoming a special one for the crypto industry, repeatedly making the crypto market &apos;jumpy&apos;. Last week, three U.S. banks went bankrupt one after another, and the crypto industry spent the week in shock. What is the magic of these banks? Is the risk to the crypto industry really over now that the Fed has bailed them out? Perhaps this is just the beginning, and the real black swan may still be on the way. This article does not intend to stoke panic, but cyclically speaking, at least for the crypto industry, it is only next year that offers a good year to look forward to.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/20ef79f98b65fe83bd0ae41d9de012f8b4b2a75144919c9987a6d35e75fa86ed.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>Crypto Market Nears Systemic Crisis in a Frightening Week</strong></p><p>Shares of Signature Bank fell 32 percent on Friday and were halted for a second straight day as bank stocks sold off in the wake of the Silicon Valley Bank and Silvergate debacles. On March 13, the Federal Reserve announced that New York-based Signature Bank had been shut down by New York state regulators on Sunday. This was the third bank failure in a week, and all three shared one big thing in common: they all had crypto operations and were pivotal to the crypto industry. Last week was indeed a frightening one, and the crypto market even staged a &quot;heart attack&quot; of its own. 
What follows is a look at the three unlucky banks caught in the storm.</p><p>Silvergate offered the Silvergate Exchange Network (SEN), a 24/7 fiat-to-cryptocurrency settlement network essential to the proper functioning of the crypto market, playing a vital role in moving funds between large investors and cryptocurrency trading platforms. Without it, institutions would find it hard to move money in and out of the crypto industry; ordinary bank transfers are limited to business hours, which is very unfriendly to a 24/7 crypto market.</p><p>Signature&apos;s SigNet network was one of the alternatives to SEN, but it went offline with the closure of Signature, a New York State chartered commercial bank insured by the FDIC with total assets of approximately $110.36 billion and total deposits of approximately $88.59 billion as of December 31, 2022.</p><p>Silicon Valley Bank was the leading bank of the venture capital community, serving some 600 venture capital firms and 120 private equity firms worldwide and holding over 50% of the startup credit market; many startups had exposure to it, and it was also a significant player in the crypto industry. It held $2.85 billion of a16z-related funds, $1.72 billion of Paradigm-related funds and $560 million of Pantera-related funds. In addition, Silicon Valley Bank invested in several crypto companies including Coinbase, held part of the stablecoin reserve assets of Circle (USDC) and Paxos (BUSD), and counted among its major affiliates Plum, Keepit, Alviere, Plaid and Paystand.</p><p>With three banks collapsing within a week, all of them vital to the crypto market, the market&apos;s plunge is understandable. Put more directly, the cash of many big crypto funds and companies sat in these three banks, as did the cash of many stablecoin issuers. 
If the three banks could not be bailed out, the crypto market would face not just a setback but systemic risk, and years of financial infrastructure built up around the crypto market would go up in flames. Even with the Fed now stepping in, half the damage may already be done.</p><p><strong>Crypto market risks not yet lifted despite Fed bailout</strong></p><p>As of March 12, 325 venture capital firms, including Sequoia Capital, signed a joint statement in support of Silicon Valley Bank; in addition, 650 founders employing more than 22,000 employees also co-signed a statement asking regulators to stop the disaster from happening, according to BizTrust. The statement from the venture capital community was led by venture capital firm General Catalyst.</p><p>Subsequently, the Federal Reserve issued an emergency release stating that the Treasury Department, with the approval of the Treasury Secretary, would allocate $25 billion from the Exchange Stabilization Fund as emergency loan support. The new financing will be provided through the creation of a new Bank Term Funding Program (BTFP), which will provide loans to banks, savings associations, credit unions and other eligible depository institutions for up to one year, collateralized by U.S. Treasuries, agency debt, mortgage-backed securities and other eligible assets. These assets would be valued at par, and the BTFP would be an additional source of liquidity against high-quality securities, eliminating the need for institutions to sell those securities quickly in times of stress.</p><p>With the Federal Reserve stepping in to take action, the USDC stablecoin de-pegging crisis seemed to be temporarily resolved. But that is not the whole story. The root cause of this crisis is still the Fed&apos;s rate hikes, which severely affected market liquidity and reflect the lagging nature of the Fed&apos;s monetary policy. So how serious are the consequences of the Fed&apos;s aggressive rate hikes? 
It is still too early to draw conclusions, and the future is full of unpredictability.</p><p>According to the latest data from CME FedWatch, the probability of the Fed raising rates by 25 basis points to the 4.75%-5.00% range in March is 96.0%, while the probability of a 50 basis point hike has dropped to zero, compared with 73.5% at the last observation (March 8).</p><p>Goldman Sachs said it no longer expects the Fed to raise rates at its March 22 meeting given the recent stresses in the banking system, according to BizTrust. Goldman still expects the Fed to raise rates by 25 basis points in May, June and July, and now sees a terminal rate of 5.25%-5.5%; there is considerable uncertainty about the path of rate hikes after March.</p><p><strong>Crypto industry may become the scapegoat as the regulatory chill deepens</strong></p><p>We have previously written that crypto banks, which connect traditional finance and the crypto markets, are the most vulnerable under the dual impact of Fed rate hikes and crypto-market deleveraging, and were thus the first to blow up. Simply put, during the bull market, crypto banks&apos; deposits from crypto-industry customers grew dramatically and their liability side ballooned; because expanding lending was too slow and cumbersome, these banks were pushed into buying securities instead. Then in 2022 the Fed entered an aggressive rate-hiking mode, interest rates rose rapidly, and bond prices fell. To cope with withdrawals, crypto banks were forced to sell bonds at a loss, turning paper losses into huge realized losses. 
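The loss mechanism just described comes down to simple present-value arithmetic. A rough sketch with invented numbers (not the actual balance sheet of any bank):

```python
# Illustrative only: made-up numbers showing why rate hikes hurt
# banks holding long-dated bonds, per the mechanism described above.

def bond_price(face: float, yield_rate: float, years: float) -> float:
    """Present value of a zero-coupon bond."""
    return face / (1 + yield_rate) ** years

# A bank buys $100M face value of 10-year bonds when yields are ~1%.
cost = bond_price(100_000_000, 0.01, 10)          # ~$90.5M

# Yields rise to ~4%; the same bonds now fetch far less in the market.
market_value = bond_price(100_000_000, 0.04, 10)  # ~$67.6M

# If depositors withdraw and the bank must sell, the paper loss
# becomes a realized loss of roughly $23M on this position alone.
loss = cost - market_value
print(f"realized loss on forced sale: ${loss / 1e6:.1f}M")
```

The same arithmetic, scaled up across a bond-heavy balance sheet, is what turned deposit flight into insolvency.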
With the recent strengthening of hawkish signals from the Fed and rising expectations of aggressive rate hikes, Silvergate and Silicon Valley Bank were ultimately unable to recover and went into liquidation.</p><p>As described above, the three banks that went bankrupt within a week were all linked to the crypto industry, with Silvergate, which had the deepest ties to crypto, the first to fail. Earlier this year, a number of regulators, including the Federal Reserve, had already issued warnings to the banking industry about the &quot;risks of cryptocurrencies&quot;. This time, regulators are likely to use the crisis to crack down on the crypto industry.</p><p>On March 13, U.S. President Joe Biden said he was glad the Treasury Department had quickly resolved the Silicon Valley Bank issue, that the parties responsible for creating &quot;this mess&quot; would be held accountable, and that he plans to continue efforts to strengthen the regulation of large banks.</p><p>With this comment from Biden, the crypto market seems to have felt a chill. According to The Block news director Frank Chaparro, the shutdown of crypto-friendly Signature Bank by New York state regulators on Sunday will make the banking situation for crypto companies incredibly difficult and absolutely brutal. The capital markets for cryptocurrencies are essentially back to pre-2014 conditions. There is no chance of any new start-up getting a banking partnership. In many ways, the cryptocurrency industry now officially lacks banking services.</p><p>According to Beyotime, Messari founder Ryan Selkis tweeted that in less than a week, crypto banking has effectively shut down. The message from the U.S. government is clear: cryptocurrencies are not welcome in banking. 
From now on, the crypto industry should go all out to protect and promote USDC.</p><p><strong>The &quot;black swan&quot; may not be here yet, and next year may be great</strong></p><p>Will this round of crypto banking crisis spread further to other US banks? How will the crypto industry evolve from here?</p><p>Amid the alarming Silicon Valley Bank crisis, Circle CEO Jeremy Allaire reiterated his stablecoin vision: We have long advocated for full-reserve digital currency banking, isolating the underlying layers of our &apos;internet money&apos; and payment systems from fractional-reserve banking risk. In fact, payment stablecoin legislation remains actively pursued in Congress and would establish in law a system under which stablecoin reserves are held in cash at the Federal Reserve and in short-term Treasury securities. If we want a truly secure financial system, we need this law now more than ever.</p><p>All things considered, the real &apos;black swan&apos; may not have arrived yet this year, and the crypto market will probably continue to be tested. From a cyclical perspective, some have already painted a rosy picture for next year: Goldman Sachs&apos; analysis has the Fed cutting interest rates once in 2024, and Bitcoin&apos;s next halving is expected to occur on April 20, 2024. If history repeats itself, then based on Plan B&apos;s data-model calculations, Bitcoin would rise to $36,000 before the next halving and to $149,000 after it. With these two cycles stacked up, it seems reasonable to be optimistic about next year; as for this year, please be careful.</p><p>Translated with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://www.DeepL.com/Translator">www.DeepL.com/Translator</a> (free version)</p>]]></content:encoded>
            <author>tiptap@newsletter.paragraph.com (taptap)</author>
        </item>
        <item>
            <title><![CDATA[a16z: Why should NFT creators choose cc0?]]></title>
            <link>https://paragraph.com/@tiptap/a16z-why-should-nft-creators-choose-cc0</link>
            <guid>DcQMlst1IiZDV4CLZhgW</guid>
            <pubDate>Fri, 05 Aug 2022 08:09:43 GMT</pubDate>
            <description><![CDATA[Thousands of creative works automatically enter the public domain on January 1 of each year, which means that on "Public Domain Day" copyright protection on those works lapses, and rights including reproduction, adaptation, and publication become free for all to use. Films, poetry, music, books, and even source code are subject to this rule, and copyright protection for a work usually lasts until 70 years after the author&apos;s ...]]></description>
            <content:encoded><![CDATA[<p>Thousands of creative works automatically enter the public domain on January 1 of each year, which means that on &quot;Public Domain Day&quot; copyright protection on those works lapses, and rights including reproduction, adaptation, and publication become free for all to use. Films, poetry, music, books, and even source code are subject to this rule, and copyright protection for a work usually lasts until 70 years after the author&apos;s death.</p><p>Entry into the public domain opens the door to a world of new creative works. Earlier this year, approximately 400,000 pre-1923 sound recordings entered the public domain, along with the first-edition Winnie-the-Pooh stories, which predate the bear&apos;s famous red-shirted Disney incarnation. Author A.A. Milne could never have imagined that Pooh and his friends, born in 1926, would be given the colors and expressions of 2022 by a community of creators. In some creators&apos; hands, the honey-jar-clutching bear has even become the protagonist of the horror movie Winnie the Pooh: Blood and Honey, with Pooh and Piglet as the villains.</p><p>Experimental reimaginings can add more value than faithful continuations of classic IP in its original style, by allowing the public to build on existing works - the same dynamic that drives the open source movement. A large part of what makes Android, Linux, and other successful open source software projects so competitive is their embrace of &quot;permissionless innovation&quot;. 
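The copyright terms mentioned above reduce to simple date arithmetic. A sketch under the stated rules (jurisdictions differ, and this is an illustration, not legal advice):

```python
# Illustration of the copyright terms described above; not legal advice.

def life_plus_70(death_year: int) -> int:
    """'Life plus 70' rule: protection runs through death_year + 70,
    so works enter the public domain on January 1 of the next year."""
    return death_year + 70 + 1

def us_pre_1978_publication(pub_year: int) -> int:
    """U.S. rule for works published before 1978: 95 years from
    publication, entering the public domain the following January 1."""
    return pub_year + 95 + 1

# The first Winnie-the-Pooh book was published in 1926, which is why
# it entered the U.S. public domain on January 1, 2022.
print(us_pre_1978_publication(1926))  # 2022
```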
The success of crypto systems, and the NFT community in particular, in attracting outside contributors is similarly due to their broad acceptance of open source and &quot;remix culture&quot;.</p><p><strong>Capturing mindshare</strong></p><p>The strategy of building brand, community, and content through IP varies greatly from NFT project to NFT project. Some projects maintain more or less standard IP protection regimes; others grant NFT owners the right to build on the relevant IP; and still others choose to waive copyright and other IP protections altogether.</p><p>By distributing digital works under a &quot;Creative Commons Zero&quot; or &quot;cc0&quot; dedication, creators give up their separate ownership rights while remaining known as the original authors. This allows anyone to produce derivative works and merchandise without the legal consequences that normally attach to &quot;secondary creation&quot;. [Special note: copyright rules around NFTs are not yet fully settled, so nothing here constitutes legal, financial, tax or investment advice. The focus of this article is on cc0, how it interacts with NFT copyright, and what creators can do for the rights of their holders]</p><p>The widespread use of cc0 in the NFT space first began with the Nouns project in the summer of 2021. 
Projects including A Common Place, Anonymice, Blitmap, Chain Runners, Cryptoadz, CryptoTeddies, Goblintown, Gradis, Loot, mfers, Mirakai, Shields, and Terrarium Club quickly followed, generating more and more quality derivatives built on them.</p><p>Meanwhile, renowned crypto artist X_COPY, who placed his iconic 1-of-1 NFT artwork &quot;Right-click and Save As Guy&quot; under a cc0 license in January, saw an influx of spin-offs just a month after it went on sale.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/239cdb291b260aed2759b460b26c0044381bc53007c235029013f3e608748fd3.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>On Monday, X_COPY went straight for the jugular, announcing his intention to go &apos;all-in&apos; and apply cc0 to all of his artwork. The artist added: &quot;We haven&apos;t really seen a cc0 summer yet, but I believe it&apos;s coming ......&quot; 
-- perhaps hinting at a potential growth period for cc0, similar to the &quot;DeFi Summer&quot; that attracted so many converts to decentralization in 2020.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/f84d064bb8445eb92a4a3eb3eb73836fcb5d9bd1aa233bf6096d6c7e3d09adf4.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Why are so many NFT creators going &apos;copyright-free&apos;?</p><p>One reason can be summed up as &apos;all for the culture&apos;: promoting native projects in order to build a more vibrant and engaged community. This makes particular sense in the crypto context, where open sharing and community building are part of the core philosophy of many crypto enthusiasts.</p><p>Creative work survives or dies by its cultural relevance. 
While NFTs natively prove ownership of digital items without the author needing to think about licensing, cc0 is designed to extend the mindshare of the original work by giving everyone the unhindered right to recreate it. As new derivatives emerge and spread, people&apos;s attention naturally flows back to the original work, raising its profile across the entire crypto world and inspiring still more re-creations - a flywheel effect in which each derivative adds value to the original, much like a platform network effect, where each additional user makes the platform more valuable to everyone else.</p><p>In other words, cc0 makes it easier for creators to &quot;capture mindshare&quot;.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/54748430f550e47e289fcdcfc3908f7ee93b7a4ecd392b3ab54f790e858da23c.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>But the use of cc0 across the digital landscape is just beginning - real-world physical products are also being derived from cc0 NFT assets. The iconic boxy glasses from the NounsDAO NFTs (&quot;one per day, forever&quot;) have been made into real luxury sunglasses by the Nouns Vision project. Blitmap&apos;s pixel art has appeared, entirely freely, on shoes, clothes and hats from various makers. 
This is in stark contrast to the traditional intellectual property model, where a single owner has a strong say over creation, licensing and production.</p><p>The hat with the Blitmap logo actually stacks multiple cc0 layers: the &quot;blitcap&quot; is a derivative borrowing features from the second-layer cc0 Chain Runners line, while the logo itself comes from a first-layer cc0 Blitmap &quot;original&quot;. That logo is in fact Blitmap&apos;s token #84, one of several &quot;originals&quot; that often serve as inspiration for the series (others include &quot;Dom Rose&quot;, token #1, etc.). The &quot;homage to the classics&quot; that secondary creators keep producing also reflects Blitmap&apos;s huge influence throughout the crypto world as a cc0 leader and as one of the first major NFT projects to announce its entry into the public domain.</p><p>These derivatives are a win-win for both the original creator and the maker of the derivative. The derivative borrows brand recognition from the original project, and when the derivative reaches the public as a standalone product, it renews interest in the original. For example, someone seeing Nouns glasses on the street or in an ad might buy a pair of their own, and might also become interested in buying an original NounsDAO NFT or other related derivatives. 
[In fact, the second author of this article first discovered Blitmap through Chain Runners, and although Blits were well beyond his purchasing power, he ended up buying a few &quot;Flipmap&quot; derivatives.]</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/35c04162ec95feffe527f0b0d3b96c5c66bfb1ff1aca0f017168102a6c041d02.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>Open source embraces co-creation</strong></p><p>Because NFTs are built on smart contract technology, part of their power comes from the technology&apos;s inherent composability. Many smart contracts are designed as &apos;building blocks&apos; that can be combined and stacked to enable richer applications.</p><p>&apos;Money Lego&apos; is the phrase used to describe combinations of decentralized finance (&apos;DeFi&apos;) smart contracts that connect to each other to form new financial use cases. 
(For example, the yield aggregator Yearn interacts with MakerDAO&apos;s stablecoin $DAI and the exchange liquidity provider Curve, among others, simply by invoking public functions on their smart contracts.) From the same composability perspective, NFTs and their underlying smart contracts can serve as the raw material from which cultures and ideas are reconfigured and stitched together.</p><p>cc0, for its part, guarantees that such building can proceed with the original creator&apos;s blessing, giving NFT enthusiasts an explicit, standing mandate from the community to build new layers of value whenever and wherever they wish.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/34295b530c04701bedc0d0348bf667026ba24f231a76bcaddefe6dd26a8247ea.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>On the grander scale of &quot;open source&quot;, cc0 even bears comparison to the rise of Linux. In the days when web 2.0 was a new concept, Microsoft controlled most of the operating system market with its closed-source operating system Windows. But Linux (and its creator Linus Torvalds) embraced a community-first spirit, opening up the source code for all to use and modify, without any restrictions on distribution. Linux&apos;s value proposition was amplified by the worldwide body of open source software available on the system, ultimately leading to explosive growth and further innovation throughout the industry. 
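The &apos;money Lego&apos; pattern described above - one contract building on another simply by calling its public functions - can be mimicked in miniature. A toy Python sketch (the classes and names are invented illustrations, not real protocols or contract ABIs):

```python
# Toy model of "money Lego" composability: one "contract" builds on
# another simply by calling its public methods. All names are invented
# illustrations, not real protocols or ABIs.

class Stablecoin:
    """Minimal ERC-20-like token: public balances and transfers."""
    def __init__(self):
        self.balances = {}

    def mint(self, who: str, amount: int):
        self.balances[who] = self.balances.get(who, 0) + amount

    def transfer(self, src: str, dst: str, amount: int):
        assert self.balances.get(src, 0) >= amount, "insufficient funds"
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

class YieldAggregator:
    """Composes on top of any token exposing transfer(): it needs no
    permission from the token's author to integrate it."""
    def __init__(self, token: Stablecoin):
        self.token = token
        self.deposits = {}

    def deposit(self, who: str, amount: int):
        self.token.transfer(who, "vault", amount)
        self.deposits[who] = self.deposits.get(who, 0) + amount

dai = Stablecoin()
dai.mint("alice", 100)
vault = YieldAggregator(dai)   # stacking a new block on an existing one
vault.deposit("alice", 60)
print(dai.balances["vault"])   # 60
```

The aggregator never asked the token&apos;s author for permission; cc0 plays the same role for the cultural layer that public functions play for the financial one.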
According to market analyst Truelist, Linux today runs more than 96.3 percent of the top 1 million web servers and 85 percent of smartphones.</p><p>As cc0 licensing begins to empower NFT community builders in a similar way, one can imagine a long-term trajectory of innovation. In NounsDAO co-founder punk4156&apos;s &quot;logic Lego&quot; vision, combining cc0 with NFTs &quot;transforms adversarial games into cooperative games&quot;. This vision has several key points: first, since decentralized systems from open source to crypto run on trust and coordination between strangers, enabling opportunities for collaboration is key to success; second, in the NFT world, this collaboration reinforces people&apos;s ownership of digital assets, giving them an incentive to keep creating while holding the assets, which in turn enhances the artistic value of the original digital assets, creating a virtuous circle.</p><p><strong>A &quot;license&quot; for creativity</strong></p><p>If a cc0 project were analogous to a single open source &quot;application&quot; or &quot;platform,&quot; the NFT artwork, metadata, and smart contracts would provide the &quot;user interface&quot; while the underlying blockchain (e.g., Ethereum) would be the &quot;operating system&quot;. However, for these applications to reach Linux-like potential, additional supporting infrastructure services need to be built out to the point where they can be invoked at any time, so that people can make the most of the remix opportunities created by cc0.</p><p>These services are already beginning to take shape. For example, the &apos;hyperstructure&apos; Zora protocol and OpenSea&apos;s open source Seaport protocol are technical foundations for building an open, permissionless marketplace for NFT transactions. 
Recently, a pixel-art rendering engine was publicly released on the Ethereum blockchain and has already been integrated into projects such as OKPC and ICE64. Each successful application strengthens the blockchain&apos;s ability to serve builders &quot;out of the box&quot;, with more and more new applications born from the growing stock of building blocks.</p><p>While the number of web3 developers is at an all-time high and expanding rapidly, the total still represents only a small fraction of the world&apos;s active software developers. Fortunately, as more developers enter the space, ambitious NFT projects may find ever more creative &quot;Lego&quot; bricks to supply the technical building blocks for cc0 projects and beyond.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/fdf0c77ad4dc40f5c6dd445a23bf8d2fb89313adf1a3f5cf2980c17da44cd01b.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Composability is the key to growth. One example of this in practice is the Loot project, one of the first groundbreaking projects to demonstrate decentralized co-creation, world-building, and other aspects of NFTs. We also share this example because it is clearly &quot;flawed&quot; or even &quot;incomplete&quot; from an aesthetic perspective, yet that very incompleteness leaves more room for the imagination and co-creation of the whole crypto community.</p><p>To add a little background on Loot: Loot begins with a series of Loot NFTs, each of which consists of a simple list of eight &quot;adventure items&quot; in white on black (e.g. Loot Bag 5726&apos;s &quot;Katana, Divine Robe, Great Helm, Wool Sash, Divine Slippers, Chain Gloves, Amulet, Golden Ring&quot;). 
These loot bags were released for free by the original creator, Dom Hofmann, as a starting point for community building.</p><p>In short order, projects have indeed begun to develop everything from lore-building to full game development, and creators from all sides have contributed many spin-offs to the &quot;Lootverse&quot;. They have produced games (Realms &amp; The Crypt); characters (Genesis Project, Hyperloot, and Loot Explorers); storytelling projects (Banners and OpenQuill); and even leveling infrastructure (The Rift).</p><p>So, how do cc0 and composability apply here? Users control the foundational Loot Bags - a kind of starting inventory that makes sense across many different games and settings - and once they connect their wallets they can use these core assets anywhere they want. This also lets them participate in numerous spin-off projects, including Genesis Adventure, whose special characters are featured in other projects, essentially enabling a decentralized franchise that is not owned by any one entity.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/2fcdf4ee145c9be7d7e9697705387275be6f4fadee880fa8a88c573f4396f666.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>When will cc0 summer arrive?</strong></p><p>As mentioned above, there are many strategies NFT projects can take when developing and building their IP. When it comes to cc0, realism matters. A license is not Aladdin&apos;s lamp and cannot easily turn just any project into a sensation - it&apos;s a pipe dream to expect the public domain to suddenly make something an unprecedented success. 
As with open source software, cc0 works best for promising NFTs that can empower an ecosystem.</p><p>Many of the most successful cc0 projects to date have succeeded by introducing intellectual property that can be used flexibly in a range of different contexts: the Nouns brand is as intuitive in beer ads as it is on physical glasses; Loot Bags are the starting props for grand adventures; the Goblintown art style looks as good on dwarves, zombies and grumpy owls as it does on Val Kilmer.</p><p>There is reason to believe that the ideal cc0 NFT project creates opportunities for builders to add value both vertically, by stacking new content and features directly on top of the original cc0 assets (e.g., games built on the Loot ecosystem), and horizontally, by introducing different but related intellectual property that helps spread the brand of the original cc0 project (e.g., the various Goblintown spin-offs).</p><p>Because cc0 NFT projects typically receive ongoing royalties from secondary sales, third-party extensions and derivatives can become a source of revenue by driving demand for the original cc0 assets, letting the business model around a cc0 NFT project benefit directly from these activities.</p><p>In addition, cc0 can reduce commercial disputes. An obsession with copyright can lead some &apos;renegade&apos; brands to push out derivative works in defiance of the license, or to produce look-alikes that &apos;bypass&apos; the original. As Robbie Broome, head of the cc0 project A Common Place, explains: &quot;By giving the intellectual property over to cc0 rather than &apos;protecting&apos; it, you can head off bad imitations down the line. For example, if UrbanOutfitters wanted to put my design on a T-shirt, instead of hiring someone on their team to design something that looked like it, they could just use the actual piece.&quot; 
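The secondary-sale royalty stream mentioned above is typically a fixed fraction of each resale price, quoted in basis points (as in the EIP-2981 royalty standard); a minimal sketch with invented numbers:

```python
# Minimal sketch of NFT secondary-sale royalties (EIP-2981 style):
# the creator earns a fixed fraction, in basis points, of each resale.
# All prices and the royalty rate are invented for illustration.

ROYALTY_BPS = 500  # a 5% creator royalty

def royalty(sale_price_wei: int, bps: int = ROYALTY_BPS) -> int:
    """Royalty owed to the creator, computed in basis points."""
    return sale_price_wei * bps // 10_000

# Each derivative that drives demand means more resales of the
# original cc0 assets, and each resale pays the creator:
resales = [1_000_000, 2_500_000, 4_000_000]  # resale prices in wei
earned = sum(royalty(p) for p in resales)
print(earned)  # 375000
```

This is why third-party derivatives can be a revenue source rather than a threat: demand for the originals, not exclusivity, is what the model monetizes.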
Sometimes, adopting cc0 can effectively turn competition into collaboration.</p><p>In addition, cc0 projects can benefit greatly from community buy-in to the value and contribution of the core assets. Community cohesion and involvement are critical here. To build on the examples already mentioned: while developers could in principle create adventure games around any theme and item concept they wanted, the cohesion of the Lootverse is reflected in the fact that so many chose to build around the Loot bags. Meanwhile, the Blitmap spin-off project Flipmap shares a portion of its revenue with the original Blitmap artists in recognition of the project&apos;s central place in the community, a move that could foster a healthy culture within the cc0 project ecosystem. As cc0 project reviewer NiftyPins noted, &quot;This is a smart move to honor the people who built the foundation for their universe. It also provides an environment for many of the OG Blitmap artists to speak their minds&quot;.</p><p>However, cc0 is not a fit for the entire crypto world - NFTs built around established brands, for example, may prefer more restrictive licenses that protect their existing intellectual property and reinforce exclusivity. Moreover, while there are similarities between cc0 and strategies where the owner specifically commercializes NFT-related IP (à la Bored Ape Yacht Club), the key difference is that a cc0 holder has no right to block others from using the same IP. It is therefore harder for a holder to build a commercial brand on a cc0 asset, or to grant exclusive rights to partners, although holders can still commercialize the IP that remains entirely under their control (e.g., backstories or derivatives they introduce themselves).</p><p>Decentralization and open development are core elements of blockchain technology and the broader spirit of cryptocurrencies. 
This makes it very natural for cryptocurrency projects to be built around the cc0 content model - which rests on a base of creative consensus and several groundbreaking open source pioneers - and cc0 represents perhaps one of the purest expressions of the &apos;open source philosophy&apos; to date.</p><p>As with the initiators of open source software projects, NFT creators who choose cc0 must decide what role they will play in assembling the ecosystem around them. Some cc0 project leaders, such as the creators of Chain Runners, continue to build on the initial cc0 assets, actively establishing an environment on which spin-off projects can build. In contrast, Dom Hofmann stepped back from Loot and left it in the hands of the community. (Dom is said to be working on other cc0 NFT projects, alongside supporting the development of projects like Blitmap.) Other creators have opted out altogether, such as the creator known by the pseudonym sartoshi, who recently announced that he was withdrawing both from the cc0 project he had been developing, mfers, and from the NFT space altogether, releasing a final piece aptly named &quot;the end of sartoshi,&quot; and then deleting his Twitter account. The mfers smart contract is now controlled by a multi-signature wallet run by seven mfer community members.</p><p>Regardless of the original creators&apos; level of ongoing involvement, cc0 licensing allows a strong community to co-create in a way that provides value to all members. As the NFT space continues to grow and mature, it is hoped that more organized infrastructure and design patterns will support the efforts of these creators. There may also be innovation around frameworks for value capture, as there has been in open source software. (For example, we might envision a version of the &quot;Sleepycat license&quot; that requires proprietary software products to pay a license fee when they embed certain open source components.) 
As creators continue to advance this space, they will develop and experiment with new rights and licensing models across the crypto world, with ideas that go far beyond the scale of today&apos;s applications. In any case, cc0 gives NFT creators a way to set their projects free - that is, to let those projects explore, on their own, the infinite possibilities open to them.</p>]]></content:encoded>
            <author>tiptap@newsletter.paragraph.com (taptap)</author>
        </item>
        <item>
            <title><![CDATA[Fed's May rate meeting releases dovish signals]]></title>
            <link>https://paragraph.com/@tiptap/fed-s-may-rate-meeting-releases-dovish-signals</link>
            <guid>ajY4msJeNEnGYPS26BeE</guid>
            <pubDate>Fri, 06 May 2022 13:04:08 GMT</pubDate>
            <description><![CDATA[In the early hours of Thursday morning Beijing time, the Federal Reserve&apos;s Monetary Policy Committee released its May resolution, the Fed decided through the meeting to increase the U.S. federal funds rate by 0.5% (i.e., up 50 basis points) to reduce the high inflation rate in the United States. After the rate increase, the federal funds rate range increased to 0.75% to 1%. This is also the first time in 22 years that the Federal Reserve raised interest rates by 0.5%. At the same time, a...]]></description>
            <content:encoded><![CDATA[<p>In the early hours of Thursday morning Beijing time, the Federal Reserve&apos;s Monetary Policy Committee released its May resolution: the Fed decided to raise the U.S. federal funds rate by 0.5% (i.e., 50 basis points) to rein in the high U.S. inflation rate. After the hike, the federal funds rate range rose to 0.75% to 1%. This is also the first time in 22 years that the Federal Reserve has raised interest rates by 0.5% at a single meeting.</p><p>At the same time, in the resolution&apos;s biggest change, the Fed formally announced that balance sheet reduction will begin on June 1, with the pace gradually rising to $95 billion per month over three months. Specifically, the initial monthly runoff caps for Treasuries and MBS are $30 billion and $17.5 billion, respectively, and will be raised to $60 billion and $35 billion after three months.</p><p>In addition, the Fed&apos;s rate meeting sent three very important signals.</p><p>1. Fed Chairman Jerome Powell said at the press conference after the rate decision that some evidence suggests core PCE has peaked; a 75 basis point hike is not being actively considered, and 50 basis point hikes are an option for the next few meetings, possibly followed by a step down to 25 basis points.</p><p>2. Fed super-hawk Bullard did not vote for a deeper rate hike.</p><p>3. The Fed did not raise the Treasury runoff to its $60 billion cap in a refunding month. It had originally been thought that the Fed would reach the $60 billion Treasury cap in August so that the Treasury could do some reinvestment in the refunding month.</p><p>Clearly, a few words from Fed Chairman Powell let financial markets breathe a collective sigh of relief: U.S.
stocks rocketed in late trading, gold approached the $1,890 mark, and bitcoin broke through the $40,000 barrier as the entire crypto asset market rose.</p><p>Michael Brown, head of market intelligence at Caxton, commented that the Fed&apos;s first 50 basis point rate hike since 2000 was expected, and that it was surprising that super-hawk Bullard did not vote for a larger hike. The FOMC also hinted at further aggressive rate hikes, reiterating its willingness to raise rates to neutral levels as soon as possible; but the market had already priced in sharp hikes, so the bar for a hawkish surprise was high. Therefore, although the Fed&apos;s decision itself was hawkish, it was somewhat moderate compared to the market&apos;s elevated expectations, which triggered a rally in risk assets and a weaker dollar, a classic case of buying expectations and selling facts, while also stimulating demand for US Treasuries.</p><p>Guosheng Securities&apos; review of the Fed&apos;s May rate meeting said that the Fed&apos;s most hawkish moment may have passed: with the U.S. economy slowing and inflation falling, the Fed&apos;s hawkishness is likely to weaken in the second half of the year, market expectations of rate hikes will cool, and the current hiking cycle may end early next year.</p><p>In the view of others, however, this situation may only be short-lived. There are concerns that without tough measures, markets will face a vicious combination of continued high inflation and slowing growth. Traders had increasingly bet that &quot;the FOMC would choose to raise rates more sharply to quell the hottest inflation in decades, a move that also increases the risk of driving the economy into recession.&quot;</p><p>Jeff Klingelhofer, co-head of investments at Thornburg Investment, said, &quot;I was surprised to see a somewhat dismissive and dovish statement on inflation.
Deep down, the Fed still believes inflation is transitory. If inflation concerns were really that serious, it could have raised rates more forcefully.&quot;</p><p>The Wall Street Journal&apos;s analysis says that despite the rally in U.S. stocks after Fed Chairman Jerome Powell made it clear he was not &quot;actively considering&quot; a 75 basis point rate hike, the Fed&apos;s hawkish turn over the past few months has created a powerful combination of headwinds for stocks. On the one hand, the Fed aims to cool the overheated economy, which may slow income growth or even shrink profits outright; on the other hand, higher interest rates make fixed income products more attractive relative to the stock market. The S&amp;P 500 has already pulled back 10% for the year, and as the Fed tightens further, the going will only get tougher for stocks.</p><p>To be sure, if the stock market really falls too fast and too hard, the Fed is likely to shelve its tightening plans. Severe financial market distress would be seen as a huge economic risk that the Fed cannot ignore. However, it may take a more dramatic sell-off to stop the Fed&apos;s tightening pace; for example, the S&amp;P 500&apos;s roughly 20% decline at the end of 2018 prompted the Fed to shelve its rate hike plans. The biggest difference from 2018 is that the concern back then was that inflation was too low, while now it is too high. In addition, wage growth was sluggish in 2018, whereas now upward wage pressure is building and the Fed is explicitly aiming to cool the job market.</p><p>Moreover, the Fed may think it can ignore the stock market&apos;s decline, and for good reason. First, even after the recent decline, U.S. stocks remain overvalued by historical standards. Further declines may therefore be seen as air coming out of an overpriced market rather than a reflection of economic problems.
The second point, which may be even more important, is that a decline in stocks may not lead to economic distress, because U.S. household balance sheets are in good shape, with debt-to-income ratios significantly lower than during the 2008-09 financial crisis. Corporate balance sheets also appear healthy, in part because many firms locked in lower borrowing costs during the sharp interest rate cuts of the COVID-19 pandemic. In addition, corporate demand for labor is strong, and falling stock prices may not dampen that demand in the near term. The Fed is unlikely to stop tightening until the economy cools, and stocks may see more pain before then.</p><p>Nick Mancini, head of research at cryptocurrency sentiment analysis platform Trade The Chain, said, &quot;Any guidance from the FOMC that doesn&apos;t include a 75 basis point rate hike is good news for cryptocurrencies and stocks. We believe the market has digested the expectation that rate hikes of 25 bps and 50 bps will continue through 2022. This brings certainty to the market, which in turn triggers bullish price action.&quot;</p><p>Analyst @tedtalksmacro said that before the meeting began, the market had fully priced in the 50 bps hike and the 75-100 bps of further tightening the Fed is signaling, and a classic &quot;sell the rumor, buy the news&quot; played out. He said the meeting was good for the bulls, with BTC and U.S. stocks heading toward range highs, and that U.S. stocks will bottom in the next 2-3 weeks before the June CPI data is released. At the same time, he noted that the Fed&apos;s initiation of progressive balance sheet reduction is a huge headwind for risk assets: reduced market liquidity will make a significant rally hard to sustain. Until inflation proves to have topped out, we can expect markets to chop around in a range rather than stage a sustained breakout.
</p>]]></content:encoded>
            <author>tiptap@newsletter.paragraph.com (taptap)</author>
            <enclosure url="https://storage.googleapis.com/papyrus_images/3fb07261fd54bfd852577d94ca0418d4c5fe251a839e26b0eae1aac531a4b71d.jpg" length="0" type="image/jpg"/>
        </item>
    </channel>
</rss>