The inspiration for this article came to me recently while I was watching a great keynote by Prof. Yann LeCun.
I strongly recommend dedicating an hour to watch it.
Here, I am going to focus on one specific concept: the autoregressive nature of LLMs and its real-world implications.
This talk helped me understand one of the persistent issues I run into while working with Claude.
At a high level, when Claude was presented with a complex task and a choice to return either a valid answer or an error, it would always return an answer: sometimes a valid one, but more often a hallucinated, invalid response.
Let's use an example for clarity. Imagine you were building a code generator that, given a set of functions and a user request, would either return valid code that, when executed, fulfils the user's request using the given functions, or return an empty function.
A prompt might look like this:
Here is the task: <task> (...) </task>
In order to complete the task, you must do the following:
1. Analyse the task carefully and break it down into separate steps.
2. For each step, you must assign a function that can fully satisfy that step.
3. Generate a valid TypeScript code that completes the entire task using provided API endpoints.
If you cannot complete the task due to a missing or incomplete endpoint, you must return an empty function.
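For illustration, a prompt like the one above could be assembled in code. This is a sketch with names of my own choosing (`buildPrompt` and its parameters are not from any specific SDK):

```typescript
// Assemble the code-generation prompt from a task description
// and a list of available API endpoint signatures.
function buildPrompt(task: string, endpoints: string[]): string {
  return [
    `Here is the task: <task>${task}</task>`,
    `Available API endpoints:`,
    ...endpoints.map((e) => `- ${e}`),
    `In order to complete the task, you must do the following:`,
    `1. Analyse the task carefully and break it down into separate steps.`,
    `2. For each step, you must assign a function that can fully satisfy that step.`,
    `3. Generate valid TypeScript code that completes the entire task using the provided API endpoints.`,
    `If you cannot complete the task due to a missing or incomplete endpoint, you must return an empty function.`,
  ].join("\n");
}
```

The endpoint list is included explicitly so the model's analysis step has something concrete to check each subtask against.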
While the details are simplified for brevity, the core message is clear. Claude needs to carry out a systematic analysis, and based on this analysis, determine the output.
The problem was that Claude would always return a code snippet, regardless of whether the necessary API endpoints were available or not.

Autoregressive LLMs, like GPT, are fundamentally designed to predict the next word in a sequence based on the preceding context.
However, their ability to "plan forward" in the same way humans can intentionally craft a narrative or strategize is limited. They don't inherently have foresight or understanding of future implications.
That said, if provided with a prompt that suggests a certain direction or objective, these models can generate text that appears coherent and in-line with that direction because they've been trained on vast amounts of text and have learned patterns, styles, and typical narrative structures.
But this should not be mistaken for genuine forward planning or intentionality. The model is still reacting word-by-word based on its learned patterns and doesn't truly "understand" or "plan" in the human sense.
Thank you ChatGPT for this explanation 😆
To put it into perspective once again: by the time Claude realised it was missing the endpoints needed to complete the user's task, the decision to output code had already been made. Think of it as a streaming response, where tokens have already been sent to the client.
The fix turned out to be really simple, though not obvious at first.
Instead of giving Claude a directive about the kind of response it should produce:
If you cannot complete the task due to missing or incomplete endpoint, you must return an empty function.
I adjusted the prompt to let Claude reflect and comment on its own output:
If you could not complete the task due to a missing or incomplete endpoint, you must include <error> at the end of your response with a message explaining what was missing.
And that did the trick!
If the <error> tag is present in the response, it indicates the code is incomplete and cannot be evaluated further.
Claude works really well with XML tags, so I often use them as a means of transporting parameters and responses. I will cover that in an upcoming blog post, so press that button below!
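A minimal sketch of that check, assuming the model's reply arrives as a plain string (the type and function names here are my own, not part of any SDK):

```typescript
// Parse a model reply that may end with an <error> tag.
// Returns either the generated code or the model's explanation
// of what was missing.
type GenerationResult =
  | { ok: true; code: string }
  | { ok: false; reason: string };

function parseReply(reply: string): GenerationResult {
  // [\s\S] matches across newlines, since the message may be multi-line.
  const match = reply.match(/<error>([\s\S]*?)<\/error>/);
  if (match) {
    return { ok: false, reason: match[1].trim() };
  }
  return { ok: true, code: reply.trim() };
}
```

The caller can then route `ok: false` results to error handling instead of trying to execute hallucinated code.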
I encourage you to experiment with how you prompt your LLM and verify whether it is providing factual answers or simply satisfying your request (and hallucinating at the same time). Asking it to self-reflect instead of giving direction may surprise you - in a positive way!