Training, or rather, teaching a GPT to understand and be able to accurately explain a novel theoretical framework is one of the most interesting experiences of my life lately.
It makes me think of possible new career fields, namely "AI Pedagogy", though not in the sense I usually see it discussed, as a discourse on how AI can help people learn, but the other way around. Perhaps this could be termed "LLM Pedagogy".
It is a very paradoxical experience: when I ask the GPT about well-known, renowned theoretical frameworks, it responds with excellence, giving the impression of having "found" correlations on its own. However, when the subject matter and framework are not obvious or common, it not only encounters great difficulty but also trips over triggers.
Words that carry strong conceptual associations in existing theoretical discourses seem to trigger the GPT to completely forget my theoretical framework and its contents, and it goes on spewing ideas centered on prior discourse by others. It makes assumptions, and at one point it even reprogrammed itself.
To remedy this, I am having to develop an entire system of teaching (training) that can facilitate its understanding.
A breakthrough of recent days has been to use tech metaphors for my concepts and explain them that way. Once the GPT has reached a basic understanding, I then ask it to compose metaphor systems in other domains, microbiology, for example, and then others. For now, this pedagogical methodology seems to be working, but only time - and the context window - will tell.
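The staged sequence above (anchor the concept in a familiar metaphor domain, check understanding, then have the model re-compose the metaphor in other domains) can be sketched as a simple prompt pipeline. This is purely illustrative: the prompt wording, `build_teaching_stages`, and the `ask` stub are hypothetical placeholders, not the prompts I actually use; any chat-completion client could be dropped in.

```python
# Illustrative sketch of the staged metaphor-teaching sequence.
# All prompt text and names here are hypothetical, not the real prompts.

def build_teaching_stages(concept: str, source_domain: str = "tech",
                          target_domains=("microbiology", "architecture")):
    """Return the ordered prompts for one concept:
    1. explain it via a familiar metaphor domain,
    2. check basic understanding without the metaphor,
    3. have the model re-compose the metaphor in other domains."""
    stages = [
        f"Here is my concept '{concept}'. Explain it back to me "
        f"using only {source_domain} metaphors.",
        f"Now summarize '{concept}' in your own words, without the metaphor.",
    ]
    for domain in target_domains:
        stages.append(
            f"Compose a new metaphor system for '{concept}' drawn from {domain}."
        )
    return stages

def teach(concept, ask):
    """Run the staged prompts through any ask(prompt) -> reply callable."""
    return [ask(prompt) for prompt in build_teaching_stages(concept)]
```

The point of the staging is that the later, cross-domain prompts only work once the earlier ones have landed, which is also why the context window matters so much.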
On LLM Pedagogy