retard


When cell phones first came out, it was hard to imagine, without reference to science fiction, that they would overtake desktop computers as the primary platform for personal computing. Although we still call them phones, as a reference to the ruse by which they made their way into all of our pockets, actual phone calls are a dying practice. Not only have phones become the primary method of personal computing, but the modalities that they popularized (forced updates, an inability to install arbitrary software, a lack of user-serviceability, monetization of your data: all expressions of an underlying disempowerment and removal of sovereignty) have made their way back to “desktop” computing, which has otherwise not seen much in the way of revolution since Windows 3.1.
Given the lack of progress on the desktop operating system front, the encroachment of mobile modalities is hardly surprising. Windows and Mac OS look roughly the same as they ever have, and, while people crave new for the sake of new, they also hate changes to things that they already use. Desktop computing is a victim of its own legacy. This has provided the foundation for mobile computing to emerge as a new paradigm with no preconceptions about how it should work, and whose modalities have provided the groundwork for a complete removal of general purpose computing power from the people.
A personal computer, in the sense that I’m using it here, is a machine capable of general computation through basic operations on numbers that can be directed to perform arbitrary calculations by an individual. It can be difficult to see the operating system projected on your screen as the result of billions of math operations per second, but that’s what it is. Personal computing, as high speed math, is fundamentally general and uncensorable, but a computer is distinct from the operating system running on it, which serves as an interface between the raw power to compute and the ability to direct that power. In this way, operating systems are the gatekeeper to what most people can do with their computers, so to control the operating systems is to control the ability to compute.
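That claim can be made concrete with a toy example: a handful of basic numeric operations, directed by a program, is enough to perform arbitrary calculations. The instruction set and program below are invented for illustration (a sketch in Python, not any real machine’s ISA):

```python
# A minimal sketch of "general computation through basic operations on
# numbers": a toy register machine with four made-up instructions.

def run(program, registers):
    """Execute a list of (op, a, b) instructions over a register dict."""
    pc = 0  # program counter
    while pc < len(program):
        op, a, b = program[pc]
        if op == "set":      # registers[a] = b
            registers[a] = b
        elif op == "add":    # registers[a] += registers[b]
            registers[a] += registers[b]
        elif op == "dec":    # registers[a] -= 1
            registers[a] -= 1
        elif op == "jnz":    # if registers[a] != 0, jump to instruction b
            if registers[a] != 0:
                pc = b
                continue
        pc += 1
    return registers

# Multiplication built from nothing but add/dec/jnz: acc = x * y.
multiply = [
    ("set", "acc", 0),
    ("add", "acc", "x"),   # acc += x
    ("dec", "y", None),    # y -= 1
    ("jnz", "y", 1),       # loop back while y != 0
]

print(run(multiply, {"x": 6, "y": 7})["acc"])  # 42
```

Everything an operating system does sits on top of loops like this one; the OS decides which programs get to run, not what the hardware is capable of.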
While computer hardware is more powerful than ever, that power is tempered by the fact that software is both less efficient and less capable than ever. There are many reasons for this, but the three that stand out to me are:
Protecting users from themselves to facilitate a more inclusive computing environment. One surefire way to increase profits is to sell the same product to more people. The number of people with computers has grown and grown to the point that having a computer is basically a prerequisite for participating in society, but we don’t have different operating systems for different IQs and capability levels, so we’re all forced to use the same one, catering to the lowest common denominator.
Extracting productivity from developers who don’t know what they’re doing. The prevalence of computers has created a massive need for people capable of working on software. What was once the realm of a highly motivated and skilled niche has become the realm of those who paid for a degree in the next big thing. When we couple this with modern universities’ role as factory farms rather than hubs of higher learning, most developers being popped out of these degree mills are not particularly useful. The introduction of high level languages and “frameworks” was an explicit effort to allow those who should never have been software developers to at least contribute in some small way to large corporate projects, but such tools result in highly inefficient code, and projects so convoluted that nobody knows how they work anymore. Despite a renewed focus on “testing” and metrics of quality within these frameworks, software is now slower and more broken than ever. Don’t take my word for it, just try to use anything: it won’t work. We used to have to ship things that worked because it wasn’t possible to update them, but high-speed Internet changed that. Now it’s standard to ship broken things and force-update them every week. Most professional developers don’t understand how to write software outside of these frameworks anymore, and don’t understand the principles of the underlying hardware necessary to write efficient code, so we have no choice but to continue to build systems optimized for being worked on by retards, and force-updating them on a weekly basis.
Monetizing products with a marginal cost of zero. Digital goods and the Internet presented a unique situation for businesses. Physical products have a cost of the goods and labor that go into the production of each piece; the “price” of the item results from placing a small premium on top of the cost of these inputs. Since digital goods can be replicated infinitely at virtually zero cost, with the cost of labor amortized over all of the copies, competition has driven the prevailing price for new digital products to zero. However, the need for companies to make money, more money than ever, didn’t go away (Meta has 78k employees). Businesses shifted from selling things for a fixed price to providing their products for “free” and/or selling subscriptions, for either the right to use their product temporarily or an optional upgrade in an otherwise free application. These techniques depend on collecting data and tracking user behavior both in and outside of applications to optimize advertising and monetization pathways. Since data is the keystone of these new business models, it became important to maximize screen time, or “engagement”, from consumers, ensuring that they spend as much time on their computers as possible. This focus on exploiting basic human instincts in an effort to make a “lifestyle” out of an application is at odds with the design of good software, because software is now designed to be used in perpetuity, not to help you. If you fully solve someone’s problems they will stop using your product, and the flow of data will end.
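The economics behind that last point can be sketched with back-of-the-envelope arithmetic; every dollar figure here is invented for illustration:

```python
# A toy model of unit cost for physical vs. digital goods. The numbers
# are made up; only the shape of the comparison matters.

def unit_cost(fixed_cost, marginal_cost, units):
    """Average cost per unit: fixed (labor) cost amortized over all
    copies, plus the marginal cost of producing one more copy."""
    return fixed_cost / units + marginal_cost

# Physical product: per-unit materials and assembly dominate, so scale
# only helps so much. The price floor stays near the marginal cost.
widget = unit_cost(fixed_cost=100_000, marginal_cost=5.00, units=1_000_000)

# Digital product: copying is essentially free, so the unit cost (and,
# under competition, the price) tends toward zero as copies increase.
app = unit_cost(fixed_cost=100_000, marginal_cost=0.00, units=1_000_000)

print(f"physical: ${widget:.2f} per unit, digital: ${app:.2f} per unit")
```

With a marginal cost pinned at zero, the only lever left is extracting recurring value from each user, which is exactly what subscriptions and data collection do.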
So, the effects of more powerful hardware have been rendered moot by the increasingly worse software landscape, and the next step here, which we already see happening, is the movement of computation from the user’s local computer to the cloud. Personal computing will soon be dominated by “thin-clients”, computers whose sole purpose is to make requests of more powerful computers in the cloud and display their output. This, of course, will lead to what we’ve been primed for—a computing experience that is highly constrained, censored, monitored, and fully monetized. Consumers simply won’t need powerful computers anymore. Several generations have now grown up with no idea how to use a desktop computer; they don’t know what it means to be the master of their computing environment, to be able to install whatever software they want, to be free from forced updates and analytics, to pay once for software and own it forever, and this is why we’re already past the point of no return. We’re frogs in a slowly boiling pot, and even though you may have faint memories of how things used to be, the next generation has no idea. We can’t encourage young people to “use Linux.” They were parented by iPads, they’re barely computer literate, and it’s impossible to make a compelling case for the inconvenience of Linux without reference to conspiracy theory. Thin-clients will be the norm within 10 years, because the benefits they offer to producers are simply too great, and the proliferation of fiber networking will finally make it possible. Take a look at Mighty Browser, Chrome OS, Google Stadia, and Repl.it for a preview of what’s to come. You will never pay a single price for a piece of software again. You’ll subscribe to everything. You won’t store your files locally. You’ll store them all in the cloud where they can be scanned for wrongthink and easily shared between all of your thin-client devices, and you’ll be happy.
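Structurally, the thin-client model described above reduces to something like the following sketch, where a local stub function stands in for the remote service (no real network protocol is implied):

```python
# A minimal sketch of a thin-client loop: the local machine does no real
# work, it only serializes a request, hands it off, and displays the
# reply. cloud_service is a local stand-in for a remote server.

import json

def cloud_service(request_json: str) -> str:
    """Stub for the remote end: all actual computation happens here."""
    request = json.loads(request_json)
    result = sum(request["numbers"])  # the "heavy" work, done elsewhere
    return json.dumps({"result": result})

def thin_client(numbers):
    """The local end: package the input, call out, display the output."""
    reply = cloud_service(json.dumps({"numbers": numbers}))
    return json.loads(reply)["result"]

print(thin_client([1, 2, 3, 4]))  # 10
```

Note where the power sits: the client can only ask questions the service is willing to answer, which is the whole point of the architecture.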
The best way to disempower someone is to get them to give up their power willingly by making the exchange transparent, convenient, and innocuous. By the time people realize what they’ve given up it will be too late to take it back without violence, and nobody wants violence, as long as they have Amazon Prime. This quirk of human nature makes it trivial to dismantle an entire culture, provided the dismantler has enough time to slowly bring the water to a boil. iCloud, once an optional cloud file backup service, is now on by default; all of your files are shared with Apple unless you specifically turn it off. Even if you do turn it off, iCloud is known for turning itself back on and copying your files to the cloud without permission. Microsoft, not to be outdone, has OneDrive, which works in the same way. This move to cloud filesystems has actually led to people no longer understanding where their files reside, because to them, the messy details have been washed away. Files live in an app, not on a filesystem.
The one opportunity that users had to expand the functionality of their operating system, third party software, is now gated by “app stores” as the default and mandatory method of installing software. Everything a user can install on their computer has been sanitized and manually approved for their consumption. Any downloads they make are tied to an account associated with their real identity, and then used to advertise things to them. All purchases have a fee skimmed off the top by the store, and developers have no choice but to distribute through the store, because it’s the only way to get software running on the operating system. At least users don’t have to worry about anything anymore, and at least they’re not in danger of installing potentially malicious software, such as anything that competes too directly with software developed by the service provider.
Now you, as a user, have been completely disempowered; the ritual is complete: they have your files, you don’t remember what a file is, they have your compute, you can’t install anything, and if they stop sending you updates you’re fucked, because you can’t use anything without the most up-to-date system. The endgame is obvious: pay up or lose your limited ability to compute. If computing were purely voluntary this would be bad enough, but computing is a de facto requirement to participate in society; the dystopian “future” where tech corporations and governments are intertwined is now.
Getting everyone to voluntarily give up their privacy by carrying recording devices on themselves at all times and planting them throughout their homes was every authoritarian’s wet dream come true. The people’s active participation in the removal of their ability to personally compute is just another step on the same path. Desktop operating systems are succumbing to high cost/low functionality mobile modalities, while desktop computers themselves are becoming less and less common. Only gamers and developers have desktops anymore, but soon they too will succumb, and the last bastion of personal computing will fall. As cyber-warfare becomes the primary battleground of the next century, disarmament is inevitable. Especially in light of the emerging revolution in AI—the ability to utilize general computation for potentially anti-government activities, such as deep fakes, will necessitate the removal of the ability to personally compute from the citizenry, but the transition will be smooth and painless; the hard work is almost done already. I can’t blame the people for giving up their power, the people are retarded. The erosion of our ability to compute will eventually be codified into law and your fellow citizens will eagerly comply, because, “What do you need local storage for if you have nothing to hide?” or “What do you need such a powerful computer for if you’re not going to use it to make subversive deep fakes?”
The pendulum will eventually swing back in the other direction, but how much knowledge will be lost in the meantime? Loss of knowledge is one of those things that’s deceptively hard to conceive of, but unless we’re uniquely different from every civilization that existed before ours, only a small minority of knowledge is ever preserved beyond its origin. If you’ve conducted a Google search in the past few years you will have noticed that you can’t find anything anymore, because Google started “optimizing” your search experience to protect you from disinformation, and, of course, to make more money. Google-Fu makes no difference. The web is no longer a tapestry of hyperlinked personal homepages and niche communities, but a highly concentrated advertising machine. Timeless longform content is out; aggregation of unsearchable new shortform content is in. You must scroll and scroll, and buy and buy, while they watch your every move. It just goes to show: it barely matters how powerful your hardware is, software controls what you can do with it. If search engine providers don’t want you to find something, you can’t find it, and if your operating system wants to force you to compute in a certain way, you eventually will.
The golden age of personal computing is over, and it’s going to be a long while before people realize the consequences, and even longer before they discover what they can do about it. What we need are mesh networks, rebel hardware, and rebel operating systems. Nascent movements for these things already exist, but nobody is going to be clamoring for them until the cute little Boston Dynamics dog with the cannon attached to its back is at their door.