The “collective exocortex” is a term I’m using as a placeholder for an idea that I predict will begin with the benign interlinking of individuals’ personal knowledgebases and might eventually evolve to encompass all shared human knowledge mediated by intelligent software agents. By the latter part of that explanation, I mean that I expect to be able to walk down a street in Seoul, ask my AirPods “What’s the deal here?”, and have social information about the local neighborhood communicated to me via AR glasses and AirPods. Examples: a blanket recommendation from a close friend that all her close friends “should talk to a bartender named Jamie in that basement bar” (the AR glasses highlight where the bar is); directions to a nearby shop selling the facial creams some friends have previously expressed interest in; or a geotagged video of a friend puking outside a nearby bar (a video shared within your friend group a couple of months ago) overlaid on your AR display exactly where it happened. The key phrase is social information, not just information.
Taking a step back, the term “exocortex” is defined by a Transhumanism Wiki entry [Wikipedia’s entry for exocortex redirects to its article on Brain-computer interface] as follows:
An exocortex is an external information processing system that augments the brain's biological high-level cognitive processes.
Charles Stross’s Accelerando (2005) is credited with leading the adoption of the term in sci-fi circles — of the 12 mentions of the term in the book, I believe this one best illustrates the concept:
About ten billion humans are alive in the solar system, each mind surrounded by an exocortex of distributed agents, threads of personality spun right out of their heads to run on the clouds of utility fog – infinitely flexible computing resources as thin as aerogel – in which they live. The foggy depths are alive with high-bandwidth sparkles; most of Earth's biosphere has been wrapped in cotton wool and preserved for future examination. For every living human, a thousand million software agents carry information into the farthest corners of the consciousness address space.
Accelerando is mainly set in a post-Singularity universe, and its descriptions are therefore wholly unrelatable for regular people living in the present day. So when I use the term “exocortex”, I’m referring to something far more pedestrian and achievable than floating clouds of computronium: if you can ask your iPhone “What was that thing Dave said about trying new spices in my homemade pasta sauce?” and get an audio response in your AirPods, then your iPhone and AirPods constitute an exocortex in my use of the word. I view that interaction as different from asking Siri/Alexa “What are the ingredients for ‘basil tomato sauce’ on allrecipes.com?” in the same way I view a cluster of interlinked personal knowledgebases as different from Wikipedia.
People with Alexa devices at home have been able to ask the voice assistant to define topics and concepts for at least a couple of years now [I’ve done this maybe three times over the past couple of years; I mainly use Alexa to tell me the weather and to turn the lights on/off], so it won’t be long before people can get a tomato sauce recipe via voice assistant. I suspect some of the recent advances in LLMs are being productized into next-gen voice assistant products by Amazon/Apple/Google/Microsoft as I write this, such that I’ll be able to ask contextual follow-up questions like “What other herbs do I need to add besides basil?” in a conversational manner. But even this wouldn’t constitute a true exocortex, because an exocortex needs to integrate personalized and social information in order to truly serve as an external brain.
Having level-set, here are some predictions I have:
personal knowledgebases will become increasingly common:
instead of writing 12 years’ worth of school notes in physical notebooks that end up in dusty crates, students will increasingly get used to using their own searchable knowledgebase (Notion, Roam Research, Obsidian, etc.) to store and connect their notes
younger cohorts will increasingly use their knowledgebases to organize information beyond their school notes (the #notion tag on TikTok has ~450 million views at the time of my writing this; explore the tag to see some of the trending uses)
according to Forbes, Notion reported 20 million users as of late 2021, up from 4 million users in April 2020; investors who funded Notion at a $10 billion valuation late last year ostensibly expect the user count to continue growing at a rapid pace
assemblages of personal knowledgebases will form into collective knowledgebases, and these knowledge networks will converge with social networks:
tangibly speaking, the formation of collective knowledgebases could be something as simple as Notion allowing different workspaces to link with each other such that you can Ctrl+K (access the command palette) and search through multiple workspaces [i.e., those that your account has been given access to and that you choose to search through]; permissioning will be important when connecting workspaces, because people will want to keep different sections of their workspaces private depending on the social context [i.e., you might share the “notes” section of your Notion with everyone in your class but share your “favorite movies” page only with your friend group]
Microsoft Loop, Almanac, Coda, and Clarity are among the collaboration-first knowledgebases that have emerged in the wake of the massive demand for remote collaboration tools sparked by COVID; an entire class of knowledge workers is internalizing the concept and practice of community wikis
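To make the permissioning point concrete, here is a minimal sketch of what a permission-aware search across linked workspaces could look like. Everything here (the `Workspace` shape, the `shares` mapping, the `search` function) is a hypothetical illustration, not any product’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Page:
    title: str
    body: str

@dataclass
class Workspace:
    owner: str
    # section name -> list of pages in that section
    pages: dict = field(default_factory=dict)
    # section name -> set of user ids that section is shared with
    shares: dict = field(default_factory=dict)

    def visible_sections(self, user):
        """The owner sees every section; others see only what was shared."""
        if user == self.owner:
            return list(self.pages)
        return [s for s, users in self.shares.items() if user in users]

def search(user, workspaces, query):
    """A Ctrl+K-style search limited to sections the user can see."""
    q = query.lower()
    hits = []
    for ws in workspaces:
        for section in ws.visible_sections(user):
            for page in ws.pages.get(section, []):
                if q in page.title.lower() or q in page.body.lower():
                    hits.append((ws.owner, section, page.title))
    return hits
```

Sharing your “notes” section with a classmate while restricting “favorite movies” to friends is then just a matter of which sections list which users in `shares`.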
Some more examples to make the concept of a “collective exocortex” tangible:
being able to say “Find me that popular tweet about cats that I liked in the middle of last year” and a shortlist of tweets showing up on your phone
saying “What path did Jeremy follow on his tour of this neighborhood?” and being able to see the path (via highlighted trails that show up on phone and/or AR display) that your friend took in a foreign city and made public for reference by his friends
querying for the biggest area of disagreement within your social network in the past week and allowing for an intelligent agent to run semantic and sentiment analysis on friends’ knowledgebases that you have been given access to
asking “Where should I eat?” or “What should I watch on Netflix?” and being able to access socially curated recommendations aggregated from the rankings that your friends have given to restaurants, movies, products, services, etc.
being able to see a knowledge graph of what all of my friends and friends of friends [you’ll be able to set permissions like “friends of my friends have XYZ level of access” for your knowledgebase] have noted down about a given subject
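The “Where should I eat?” example above reduces to aggregating the ratings friends have chosen to share. A toy sketch, with made-up data structures rather than any real recommendation system:

```python
from collections import defaultdict

def recommend(category, friend_rankings, top_n=3):
    """Rank items in a category by their average rating across friends.

    friend_rankings maps friend -> {category -> {item -> rating}}.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for ratings in friend_rankings.values():
        for item, score in ratings.get(category, {}).items():
            totals[item] += score
            counts[item] += 1
    avg = {item: totals[item] / counts[item] for item in totals}
    return sorted(avg, key=avg.get, reverse=True)[:top_n]
```

A voice query like “Where should I eat?” would call something like `recommend("restaurants", friend_rankings)` over whichever friends have granted you access to their rankings.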
Golden Protocol is enabling the creation of a “decentralized canonical knowledge graph” and has so far raised a total of $19.5M in funding from a list of VCs that includes a16z and Founders Fund; Golden Protocol is the “web3 version” of Golden.com, which offers products like their “Knowledge Graph”, “Query Tool”, “Research Requests”, and “Knowledge Storage”; one of their products lets you “Setup alerts for the topics you are tracking and get notified of new topics that match your rules” [e.g., IF a “New” “Quantum computing company” is “Added” THEN send an alert to Alice and Bob]
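The IF/THEN alert rule quoted above amounts to predicate matching over incoming events. A minimal sketch with hypothetical data shapes, not Golden’s actual implementation:

```python
def matches(rule, event):
    """True when every condition in the rule is satisfied by the event."""
    return all(event.get(key) == value for key, value in rule["when"].items())

def dispatch(rules, event):
    """Collect everyone who should be alerted about this event."""
    notified = []
    for rule in rules:
        if matches(rule, event):
            notified.extend(rule["notify"])
    return notified

# IF a "New" "Quantum computing company" is "Added" THEN alert Alice and Bob
rules = [{"when": {"status": "New",
                   "topic": "Quantum computing company",
                   "action": "Added"},
          "notify": ["Alice", "Bob"]}]
```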
using AI/ML and visualization tools to automatically organize and intuitively interface with personal/social information:
the most often cited friction point in knowledge management tools is the effort and time required to organize concepts and links into their rightful place, a difficulty compounded when that “rightful place” changes and categories/tags need updating; AI/ML will eventually be able to organize information automatically through some combination of learning how most people manually organize certain tags and using reinforcement learning to personalize that organization to our preferences
Obsidian’s Graph view lets you see an interactive graph of where you are situated within your overall knowledge graph; extrapolating from this feature, we can expect to one day visualize knowledgebases in a 3D graph view and intuit the conceptual distance between different entries
the reading and writing of data into these knowledgebases will become multi-modal; reducing the friction of getting information into and out of these knowledgebases will be an actively pursued UX problem, aided by AI/ML trained to better understand intent [e.g., assistants will learn to parse contextual requests like “Show me the dinner bill from last week” as well as learn from statements intended to teach them, like “This is MY room” (pointing at a particular space with a geolocation-enabled camera)]
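The “conceptual distance” in the Graph view bullet above is already computable if entries are embedded as vectors: nearby vectors mean related notes. A minimal sketch using cosine distance (the note titles and embeddings are invented for illustration):

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity; smaller means more conceptually related."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def nearest_notes(target, embeddings, k=2):
    """Rank the other notes by closeness to the target note's embedding."""
    others = [(title, cosine_distance(embeddings[target], vec))
              for title, vec in embeddings.items() if title != target]
    return sorted(others, key=lambda pair: pair[1])[:k]
```

A 3D graph view could then lay notes out so that on-screen distance roughly tracks `cosine_distance`, letting you intuit relatedness at a glance.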
miscellaneous:
interoperability, compatibility, and ease of importing/exporting content will factor into how easily disparate personal and communal knowledge can be networked together into collective knowledgebases
cryptographic timestamping of entries will become an automatable option for people who want to verifiably prove they had a particular idea at a given point in time; personally, the primary utility of publishing tools like Paragraph.xyz and Mirror.xyz is that publishing on those platforms also publishes an Arweave transaction, thereby cryptographically timestamping the article
it’s likely that tokengating mechanisms will be implemented to allow workspace owners to grant/revoke permissions for different parties; if so, and if increasing proportions of online information are created within these permissioned gardens, then webcrawler-based methods of indexing the web (i.e., Google) will become increasingly obsolete, since they won’t have access to private and semi-private channels of information; a “Dark Forest” of knowledgebases will make it hard for webcrawlers targeting training data for AI/ML models to aggregate valuable data from internet users, and will give users leverage to charge for access to the content they produce
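The timestamping bullet above rests on a simple primitive: publish a hash of the entry somewhere append-only (an Arweave transaction, in the Paragraph/Mirror case), and later reveal the text to prove priority. A minimal sketch of that hash-and-verify step; the record format here is invented for illustration:

```python
import hashlib
import time

def fingerprint(entry_text):
    """Hash an entry; publishing only the digest proves the text existed
    by that time without revealing the text itself."""
    digest = hashlib.sha256(entry_text.encode("utf-8")).hexdigest()
    return {"sha256": digest, "claimed_at": int(time.time())}

def verify(entry_text, record):
    """Anyone holding the original text can re-hash it and compare."""
    return hashlib.sha256(entry_text.encode("utf-8")).hexdigest() == record["sha256"]
```

The trust in `claimed_at` comes from where the digest is published, not from the hash itself; an on-chain transaction supplies the tamper-evident clock.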