Herbert Simon, in an oft-quoted passage from his 1971 piece “Designing Organizations for an Information-Rich World”, foresaw how information overload would transform attention into one of our most valuable resources:
In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.
Over 50 years have passed, and internet-scale data has made this problem more acute than Simon could ever have imagined. Yet our research infrastructure does not reflect this reality. What research merits attention? What papers, trends, and discussions should I, as a researcher, be paying attention to? For all the awareness of the problem, there is still a surprising lack of tools helping researchers answer these pressing questions. For most researchers, information overload means a constant feeling of falling behind the exponential growth of knowledge in their field: too many open browser tabs, and not enough ways to decide which to close and which to read.
Journals used to be the answer; traditionally, they were tasked with filtering out poor research and curating high-quality work. However, journals have fundamentally failed to scale with the explosion of research output, often taking years to review and publish while knowledge evolves at internet speed. Moreover, commercial incentive structures have warped their filtering function: “what science gets done and how it is presented, has been distorted by what is profitable for publishers”.
Can’t we just ask Claude or ChatGPT? Despite remarkable recent advances, AI alone is not a solution either. To see why not, try to imagine Goodreads or IMDB reviews where all the reviews are AI generated. Though Mark Zuckerberg might not see any problem with that, for most of us, something would be missing. AI will certainly be part of the solution, synthesizing, aggregating and organizing information, but there is a fundamentally social aspect of attention that cannot and should not be automated. Shared attention is part of what makes us human. We care about what human peers and experts find interesting, not just what opaque algorithms recommend.
The social nature of attention is also attested to by the widespread use of social media by researchers. Ask a researcher how they find interesting papers to read, and more often than not, social media features prominently in their answer. And yet, social media is not designed for science: research content is buried in noisy feeds, platforms optimize for engagement rather than sensemaking, data is siloed and fragmented, and the medium is not expressive enough for complex knowledge work. Despite its limitations, social media provides an important clue towards how we can overcome information overload.
To better understand the role of social media in guiding individual and collective attention, let's revisit Herbert Simon's statement of the information overload problem. In his account, the attention economy is linear: attention is a resource consumed by information, lost forever - like fossil fuels burned to generate energy. But what if we thought about attention differently? In contrast to a linear economy, a circular economy aims to create systems where resources are continually reused and recycled, minimizing waste and maximizing efficient resource utilization. The circular economy framing suggests a follow-up question to Simon: information consumes attention, but what does the consumption of information produce? In other words, what are the resulting artifacts from the process of attending to information?
Classic examples of attention artifacts are notes or annotations in personal knowledge tools like Obsidian, Notion or Zotero. Sometimes there may be no material "attention traces" - perhaps a paper wasn't worth finishing or taking notes on. And obviously, in some cases insights will be kept private. But more and more researchers are sharing insights publicly, for example as posts on social media recommending or critiquing something they recently read.
These short shared insights - public products of human attention - are like ant pheromone trails, crucial signals for shaping collective attention. They represent a more circular attention economy at work, where one person's consumed attention becomes a signal that guides others' attention allocation.
But such shared insights are still few and far between. How much of what we read do we actually share publicly, even when privacy is not an issue? How much do we share in open and interoperable formats, versus posting into platform silos? Currently, most of our attention is consumed and not recycled into useful signals for the broader research community. Joel Chan, a researcher at the University of Maryland, talks about the "untapped creative exhaust" of millions of researchers already poring over hundreds of millions of references - a vast repository of human sensemaking that remains largely invisible and underutilized. A.J. Boston, a scholarly communications librarian at Murray State University, has also written extensively about the great (yet unfulfilled) promise of harnessing internet-scale energy for open micro-reviews: “Considering the web’s tendency for moderation and review, it’s honestly a little weird we don’t already have established practices in place to more systematically enable, capture, and display open comment on research objects”.
Imagine if digital attention sharing was more of the norm. The fellow in the meme would have access to a kind of “co-augmented reality glasses” to overlay their bookshelf, surfacing previously hidden trails of knowledge and insight shared by others:
The Goodreads mobile app demonstrates this kind of co-augmented reality at work: point your phone at a book and pull up human reviews of it. This works impressively well, both in the coverage and the quality of the reviews. A kind of Goodreads for research - though it might differ in implementation details (e.g., a five-star rating system may be overly simplistic) - would clearly be incredibly valuable. AlphaXiv is a great example of work already moving in this "science Goodreads" direction, specifically for arXiv preprints.
The opportunity is clear: we need systems that can represent, aggregate, and redistribute the traces of human attention in ways that help the entire research community navigate information overload more effectively. This means building tools that make it easy and worthwhile for researchers to share their attention traces - whether that's a short review, an annotation, or a recommendation - and then intelligently routing those signals to others who would benefit from them.
Such a system would create positive feedback loops: the more researchers participate by sharing their attention traces, the better the system becomes at helping everyone find relevant, high-quality information. Social feedback would allow participants to see their contributions making a tangible difference, thus encouraging further sharing. Building this infrastructure according to FAIR (Findable, Accessible, Interoperable, Reusable) data principles also means that, rather than being captured in platform silos, attention data can flow easily across the ecosystem to inform other content recommendation and discovery services.
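As a concrete illustration, an interoperable attention trace could be a small, self-describing record. The schema and field names below are hypothetical assumptions, loosely inspired by the W3C Web Annotation data model; no actual Cosmik schema is implied:

```python
import json

# Hypothetical attention trace record. All field names are illustrative
# assumptions, loosely modeled on the W3C Web Annotation vocabulary.
trace = {
    "type": "AttentionTrace",
    "creator": "https://orcid.org/0000-0000-0000-0000",  # placeholder ORCID
    "target": "https://doi.org/10.0000/example",         # placeholder DOI of the work attended to
    "motivation": "recommending",  # e.g. recommending, critiquing, annotating
    "body": "Clear replication of the core result; section 4 is essential reading.",
    "created": "2024-01-01T00:00:00Z",
}

# Serializing to plain JSON keeps the record portable across platforms
# and services, rather than locked into any one silo.
serialized = json.dumps(trace)
restored = json.loads(serialized)
```

Because the record is just linked data over standard identifiers (ORCID, DOI), any discovery service could consume it without negotiating access to a proprietary platform.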
It's a fundamentally different approach from both the traditional gatekeeping of journals and pure algorithmic recommendation - one that harnesses the collective intelligence of the research community while respecting the profoundly human aspects of attention and judgment.
There are still many open questions to address as we build this infrastructure:
Incentivizing participation: What tools and social mechanisms can make sharing attention traces feel rewarding rather than burdensome? How do we move beyond the current reality where most insights remain private?
Quality and moderation: How do we balance open, permissionless sharing with the need to maintain signal quality? What approaches can prevent gaming or manipulation while preserving the authentic human judgment that makes these traces valuable?
Interface design: Reviews and curation come in many forms—faceted reviews, star ratings, annotations, recommendations. What are the right schemas and interfaces that balance expressiveness with ease of use, avoiding both oversimplification and overwhelming complexity?
Aggregation and discovery: How do we synthesize thousands of individual attention traces into coherent signals that support effective navigation without creating new forms of information overload?
Data sovereignty and ownership: How can we preserve users' data sovereignty and provide an alternative to the extractive practices that dominate today's platforms, where user-generated content is harvested without meaningful consent or compensation? What models ensure that researchers retain control over their attention traces while still enabling collective benefit?
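To make the aggregation question concrete, here is a toy sketch (all names and data are hypothetical) that folds individual attention traces into a simple per-paper signal by counting recommendations per target. A real system would need to weight by recency, expertise, and trust, which is exactly what makes the question open:

```python
from collections import Counter

def aggregate(traces):
    # Count how many traces recommend each target work; this deliberately
    # ignores weighting, deduplication, and manipulation resistance.
    counts = Counter(t["target"] for t in traces if t["motivation"] == "recommending")
    return counts.most_common()

# Hypothetical sample traces
traces = [
    {"target": "paper-A", "motivation": "recommending"},
    {"target": "paper-B", "motivation": "recommending"},
    {"target": "paper-A", "motivation": "recommending"},
    {"target": "paper-C", "motivation": "critiquing"},
]
ranking = aggregate(traces)  # most-recommended target first
```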
Navigating the perils and promise of the digital age requires, according to author James Williams, that “we give the right sort of attention to the right sort of things... A major function, if not the primary purpose, of information technology should be to advance this end.”
Building circular attention economies represents one promising path toward that goal - turning our attention from a scarce resource into a renewable one that compounds in value as more people participate.
We’re excited to be attending to attention and pursuing these questions at Cosmik. We also think the best way to address them is together, in community. In the next few months we will be exploring new ways to foster these discussions. Meanwhile, feel free to drop us a line or find us on social media!