
Cross-compiling C++ with Bazel and Wolfi
rules_apko, toolchains_llvm and Wolfi

Left of Launch: Questioning the Speed-Quality Tradeoff in Software Engineering
The concept “left of launch” comes from missile defense: in the timeline, the events that happen before the missile launch are to the left; this is where the idea of Shift Left in QA testing comes from as well. You can apply this idea to building code, unit testing, benchmarking and even security scanning: the common thread is that in all these things when there is a mistake of some kind, you want it to become apparent as soon after the mistake is made as possible. Why you might want that, an...

Pre-Building Standard Devcontainers with GitHub CI
To support Left-of-Launch quality checks and ensure a consistent development environment straight out of GitHub, I make extensive use of VS Code’s devcontainers. The one problem is once you have built up a large set of tools, the time to rebuild the container gets to be quite long -- unnecessarily so, because the vast majority of the layers in the container never change. Ideally we want the majority of our core features to just be available as a pre-built image. If you search for how to do th...

Google’s mission “to organize the world’s information and make it universally accessible and useful” was in part inspired by the vision of Star Trek’s computer. Generationally, for Millennials and Generation X, there is a more recent point of reference with Jarvis and Tony Stark. Science fiction has always had this role in society: it’s a Janus that faces both the present and the future, incorporating our anxieties and expectations and projecting forward. This dual nature means, of course, that the fiction makes its way into present dreams, and influences the course of progress.
Part of the concept for Cloudwall -- a clean-slate risk technology firm for digital asset traders that I started in the fall of 2021 -- came out of a small plot element in William Gibson's The Peripheral: the idea of "house quants" operating on present markets at the behest of their past-bending oligarch, Lev Zubov. Technology sufficiently advanced is not just indistinguishable from magic, as Arthur C. Clarke had it; it's also pervasive -- not necessarily a commodity, but in hands that previously might only have accessed it at multiple removes, through an investment in a quantitative hedge fund arranged by a private banker.
The science fiction came first, but it was followed by a very real company that was half technologists and half quantitative researchers looking to make technology and models from Wall Street available in a new market.
This brings us to OpenAI's addition of memory to its models over a year ago, which just got a significant upgrade. Privacy is important to me, and I am certainly alive to the risks of models having access to an increasingly detailed picture of my life -- I'd prefer to keep control and confidentiality to the greatest degree possible. But there's another way to think about building up memory: as a kind of investment. It's not ideal that Sam Altman holds that investment in his cloud storage, and I wish it were otherwise, but I am also still OK with banks holding my money despite my long-time interest in Bitcoin and cryptocurrency markets. So, like all of us in our interactions with centralized, commercially driven services, I weigh the trade-offs inherent in handing over knowledge of my life, and the power granted by doing so.
A truly capable AI assistant is not going to be something you can buy. What will be on offer, commercially, will be agents that have general knowledge, access to tools like Web search, and an increasingly powerful ability to reason from those foundations. What they won't have is knowledge about you, though they will have the ability to interact and learn from those interactions. Tony Stark had interests (weapons engineering) that you might not share; even if you were given Jarvis, what are the chances he would be an effective assistant for you? This means the memory in the model's context is going to have to be built up. You can start now or you can start in two years, and it will get easier and maybe safer in time, but it has to be done.
This is going to involve a mutual teaching that, after more than 20 years of passive use of Web technology, is going to feel strange: we are used to asking and getting answers, something we continue to do far too often with AI, as if it's just a better Google. This, in my mind, is a missed opportunity. We should not just be using AI; we should also be teaching, and that includes teaching the machine what we want to learn, and how. It's going to take work, and a reckoning with those very real trade-offs of privacy. But personally I think it's sufficiently important that it should start now.

Subscribe to Kyle Downey
