Twitter Space at 10 PM SG time every night Tg group: https://t.me/DefiNightclub
Host: @Jinbin Xie @Huobi Ventures
Date: 2022-04-21  Recorders: @RichardNnnz @Megalodon
Note: Not Financial Advice, Risk Your Own Money
Market cap $2.06T, up 0.5%; 24h volume $113.15B. Volume has been heavy these past few days, consistently above $100B. BTC dominance 39.3%, ETH dominance 18.4%. Large caps have led recent gains, chiefly BTC up 2.6%, with ETH up somewhat less. Pullbacks in previously strong small caps are widespread; a few small caps did rally, but on clearly event-driven news, and several older L1s posted gains, though nothing especially strong. BTC leading ETH is generally not great for small caps, but a choppy, wide-ranging market was broadly expected. Notably, ahead of the May 3 FOMC meeting the Fed enters a blackout period with no further speeches from governors, so the next two weeks of trading should see little interference from new Fed commentary; still, one or two more big US tech earnings blowups would make things quite unsafe. The NFLX miss did weigh on the broader market, though not by much; Tesla's results over the past two days were excellent and overall sentiment is decent, so the market hasn't been dragged down much.
Justin Sun is set to issue an algorithmic stablecoin on Tron;
EOS is pushing hard on DAO work; before that it was EVM compatibility;
Monero's upgrade fork is set for July; it has rallied decently since the date was confirmed, a concrete catalyst;
Tesla Q1 results beat expectations. Tesla has rallied well over the past two days on the Q1 beat. The beat can be considered old news that was already on the table: new-energy-vehicle sales rankings surface this kind of data before the Q1 earnings report, so Tesla's Q1 result was relatively foreseeable.
DeFi Wonderland proposes a SushiSwap restructuring. DeFi Wonderland proposed completely rebuilding the SushiSwap system and scrapping the xSushi governance mechanism; it's unclear how they intend to push the proposal through. Sushi has not shipped anything in a long, long time, so hard to say what will come of this. Note: DeFi Wonderland's restructuring proposal would drop xSUSHI staking in favor of a veSUSHI model. Link: https://forum.sushi.com/t/fukkatsu-make-sushi-great-again/10030
0x Protocol announced it is powering Coinbase's NFT social marketplace. 0x rose sharply yesterday on the announcement. This is outsourcing-style work for the team; it's understandable that the market trades such a narrative, but the deal says nothing about token accrual or how value would flow to the token, so it appears to do little for the token economics.
NFT distribution: is NFT recommendation an artificial use case? "A friend at ByteDance thinks an NFT is a commodity plus information: you could scrape NFTs from OpenSea, redistribute them with a recommendation algorithm, and run a Web2-style e-commerce operation." Right now most NFT buying seems speculative, with KOL promotion and holdings, mint data, and so on serving as trading signals; behind it all is brand packaging and hot-money speculation. So is recommending blue-chip or undiscovered NFTs a false premise, given that 80% of trading concentrates in the top 20% of NFTs? That's what we wanted to discuss. Twitterscan has been widely recommended lately for seeing which KOLs bought each NFT; in a decentralized world people lack trust anchors, and KOLs have naturally become the retail crowd's anchor.
Unica: For outsiders who have never been through the Web3 world, I'd advise dropping this attempt. They can invest $10k buying NFTs on OpenSea to test the waters and validate the demand themselves. When you can't persuade Web2 people, let them try losing a little money: the smarter they are, the faster they lose, and the faster they lose, the faster they figure it out and come around.
Oar: From upstream to downstream, from the identity layer to the data layer, much of the infrastructure doesn't support this kind of algorithmic recommendation; and whether recommendation is even a valid need at Web3's current stage of development deserves a big question mark.
Oar: It's not just Web2 founders; plenty of investors also like to transplant Web2 content rigidly into Web3, backed by a whole set of theories ground out in the Web2 rat race. Most such products I've encountered don't look likely to succeed, because Web3's existing infrastructure doesn't support such a rigid transplant. Even on the technical side alone, porting Web2 theory and scenarios over runs into many differences; Web3's business models, production relations, and project-user relationships and structures all differ from Web2, and there's a lot of nuance in there.
Jinbin Xie@Huobi Ventures: Agreed. It's habitual thinking, force-fitting historically familiar logic onto a new context. Analyze each scenario on its own terms; linear transplantation doesn't solve the problem.
Modular blockchains. Jinbin Xie@Huobi Ventures: I've been looking into modular blockchains lately and came across a very impressive paper. Today, every block couples receiving and sending; the design in this paper separates the two for each block. The benefit is analogous to splitting out business logic in software development: the pieces can be composed and scheduled more flexibly. In effect, message receipt and dispatch are split into modules that communicate over a gossip network. The paper distinguishes eventual consistency from total consistency: total consistency means that once a node sends a message it reaches every node in the network, while eventual consistency means it may reach only a few nodes at first but eventually traverses all nodes in the network. With the receiving and sending points separated, each block generates a proof π; to reconstruct the full block during propagation, you only need to collect the proofs and the real block can be recovered. I haven't finished the paper yet. Prediction: modularizing the blockchain base layer is a major trend.
Other approaches to modularity: sharding. The original Diem team has seeded a lot of project-building talent, at least four projects; Celestia is now valued at $1B.
Note: paper link: https://eprint.iacr.org/2022/455.pdf
Related notes: The Proof of Availability & Retrieval Problem. In a nutshell, the PoA&R problem detaches the act of "sending" a block from the part in which nodes "receive" it. Thus, a significant amount of the costs is transferred from the critical path to a time of the recipient's choice. To do so, each block is translated into a (short) proof π and when a node aims to inform the network of a new block of information (or transactions in our blockchain example) it disseminates π instead of the actual block. This can be done, for example, by broadcasting π, which is cheaper than broadcasting the block itself when using an efficient proof generator. A node that receives π stores it and is essentially convinced that when the actual block is needed it will be retrievable.
To obtain the block itself, processes can retrieve it at their own time. In this sub-protocol they reconstruct the initial block using the stored proof π. Since we alleviate the costs of dispersing the block's evidence into the network, the act of retrieving the block must incur additional costs. However, this kind of paradigm equips systems designers with the flexibility to decide when to undertake such costs. Specifically in blockchain systems, in times of congestion processes can progress by making consensus decisions on proofs alone, whereas the block retrieval and execution can be updated when the load decreases.
In our proposed solution, the creation of the proof π is done using an erasure code scheme and a vector commitment scheme. When a process aims to share a block, it uses erasure coding to create a vector of n code words. It then creates a commitment that binds each word to the entire vector and sends each word (together with the commitment) to a different process. Processes that receive a commitment return a signature to the sender. Once the sender collects "enough" signatures, it forms the proof π that the block can be reconstructed. This is the basic "push" part in several AVID protocols [13, 20, 33]. In existing AVID protocols, retrieving the block (corresponding to the proof π) from the network is done via collecting a large number of code words and reconstructing the block. This might be too costly in large-scale systems. Instead, for the retrieval part, we propose a randomized solution that is deterministically safe and provides liveness with probability 1. Our proposed protocol incorporates vector reconstruction with random sampling. That is, a process that attempts to retrieve a block occasionally samples a random subset of processes and asks them for the block. Clearly, when processes first try to retrieve the block, the creator of the block is the only process that knows it, thus, more communication rounds are needed.
However, the spread of information is typically very fast. This intuitive claim is formally proved in Section 5. Moreover, we analyze different sample sizes that allow for different trade-offs in the cost structure. Main contributions: We formalize a modular architecture for the design
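The disperse-then-retrieve flow from the excerpt can be sketched in code. This is a minimal illustration under loud simplifications, not the paper's protocol: plain chunking stands in for erasure coding (so all n shares are needed instead of any k-of-n), a hash chain stands in for a real vector commitment, and (pid, commitment) tuples stand in for signatures; the names `disperse`, `retrieve`, and `ProofPi` are invented for this sketch.

```python
import hashlib
import random
from dataclasses import dataclass

def split_into_words(block: bytes, n: int) -> list:
    """Split a block into n shares. Toy stand-in for erasure coding:
    a real scheme (e.g. Reed-Solomon) would let any k-of-n shares
    reconstruct the block; here all n are needed."""
    size = -(-len(block) // n)  # ceiling division
    return [block[i * size:(i + 1) * size] for i in range(n)]

def vector_commit(words) -> str:
    """Bind each word to the whole vector (stand-in for a real vector
    commitment such as a Merkle root)."""
    h = hashlib.sha256()
    for w in words:
        h.update(hashlib.sha256(w).digest())
    return h.hexdigest()

@dataclass
class ProofPi:
    commitment: str
    signatures: list  # acknowledgements from 2f+1 distinct processes

class Process:
    def __init__(self, pid: int):
        self.pid = pid
        self.shares = {}  # commitment -> stored code word

    def store_and_ack(self, commitment: str, word: bytes):
        self.shares[commitment] = word
        return (self.pid, commitment)  # placeholder for a real signature

def disperse(block: bytes, processes, f: int) -> ProofPi:
    """'Push' phase: send one word (plus the commitment) to each
    process, then form pi once 2f+1 acknowledgements are collected."""
    words = split_into_words(block, len(processes))
    com = vector_commit(words)
    sigs = [p.store_and_ack(com, w) for p, w in zip(processes, words)]
    return ProofPi(com, sigs[:2 * f + 1])

def retrieve(pi: ProofPi, processes, sample_size: int = 2) -> bytes:
    """Randomized retrieval: repeatedly sample a random subset of
    processes and collect their words until the block reconstructs
    and matches the commitment carried in pi."""
    collected = {}
    while len(collected) < len(processes):  # toy code needs all shares
        for p in random.sample(processes, sample_size):
            if pi.commitment in p.shares:
                collected[p.pid] = p.shares[pi.commitment]
    words = [collected[p.pid] for p in processes]
    assert vector_commit(words) == pi.commitment  # verify against pi
    return b"".join(words)
```

Usage: with four processes and f=1, `disperse(block, procs, f=1)` yields a π carrying three acknowledgements, and consensus could proceed on π alone while `retrieve(pi, procs)` recovers the block later, matching the paper's point that retrieval costs are deferred to a time of the recipient's choosing.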
END 🔚