Note: this post grew out of thinking about how traditional finance and tech analogies apply to blockchains, and realizing that most of them are poor analogies despite how widely they are used.
Humans can acquire knowledge without a well-understood explanation.
Ignaz Semmelweis, a Hungarian obstetrician working at the Vienna General Hospital in 1847, noticed that maternal mortality from puerperal fever was dramatically higher after births attended by doctors and medical students than after those attended by midwives. Investigating further, Semmelweis connected puerperal fever to examinations of delivering women by doctors, and realized that these physicians had usually come directly from autopsies. Asserting that puerperal fever was a contagious disease and that matter from autopsies was implicated in its development, Semmelweis made doctors wash their hands with chlorinated lime water before examining pregnant women. He then documented a drop in the mortality rate from 18% to 2.2% over the course of a year.
Ignaz Semmelweis did not understand germ theory before making doctors wash their hands with chlorinated lime water. Instead, our understanding of germ theory came because Semmelweis made doctors wash their hands. Our knowledge that washing hands led to better patient outcomes preceded a good explanation of that result, which eventually became the germ theory of disease.
In What Is ChatGPT Doing ... and Why Does It Work?, Stephen Wolfram argues that ChatGPT might contain knowledge of how human language works without us understanding how our language works.
In other words, the reason a neural net can be successful in writing an essay is because writing an essay turns out to be a "computationally shallower" problem than we thought. And in a sense this takes us closer to "having a theory" of how we humans manage to do things like writing essays, or in general deal with language.
So how is it, then, that something like ChatGPT can get as far as it does with language? The basic answer, I think, is that language is at a fundamental level somehow simpler than it seems. And this means that ChatGPT, even with its ultimately straightforward neural net structure, is successfully able to "capture the essence" of human language and the thinking behind it. And moreover, in its training, ChatGPT has somehow "implicitly discovered" whatever regularities in language (and thinking) make this possible.
The success of ChatGPT is, I think, giving us evidence of a fundamental and important piece of science: it's suggesting that we can expect there to be major new "laws of language" (and effectively "laws of thought") out there to discover. In ChatGPT, built as it is as a neural net, those laws are at best implicit. But if we could somehow make the laws explicit, there's the potential to do the kinds of things ChatGPT does in vastly more direct, efficient, and transparent ways.
As of now, we're not ready to "empirically decode" from its "internal behavior" what ChatGPT has "discovered" about how human language is "put together".
My strong suspicion is that the success of ChatGPT implicitly reveals an important "scientific" fact: that there's actually a lot more structure and simplicity to meaningful human language than we ever knew, and that in the end there may be even fairly simple rules that describe how such language can be put together.
It's amazing how human-like [ChatGPT's] results are. And as I've discussed, this suggests something that's at least scientifically very important: that human language (and the patterns of thinking behind it) are somehow simpler and more "law like" in their structure than we thought. ChatGPT has implicitly discovered it. But we can potentially explicitly expose it, with semantic grammar, computational language, etc.
It's easy to dismiss ChatGPT because it's "just generating a 'reasonable continuation' of whatever text it's got so far." But we should seriously consider that ChatGPT might have uncovered the methods our brains use to turn what we have learned into text. In other words, ChatGPT might embody knowledge of how human brains work without us understanding how our brains work. The understanding comes after the knowledge.
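To make that "reasonable continuation" concrete, here is a minimal sketch of next-token text generation in Python. It assumes the Hugging Face transformers library and uses GPT-2 as a small, openly available stand-in for ChatGPT (which is not an open model); the prompt and sampling settings are arbitrary illustrations, not anything from Wolfram's essay.

```python
# A rough sketch of "reasonable continuation": a language model repeatedly
# predicts a plausible next token given the text so far. GPT-2 stands in
# here for ChatGPT purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Semmelweis made doctors wash their hands, and"

# Each sampling step draws the next token from the model's probability
# distribution over continuations of everything generated so far.
outputs = generator(prompt, max_new_tokens=40, do_sample=True, top_k=50)
print(outputs[0]["generated_text"])
```

The point of the sketch is only that the mechanism really is "continue the text plausibly"; whether doing that well requires implicit knowledge of how language and thought are put together is the question Wolfram raises.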