
My goal is to export data from haptic robots, publish that data on a node between Linux and ROS2, and then use the ROS2 bridge. With this bridge, I can read the robot's data in a simulation. With this loop, I can control and teleoperate the robot both in real life and in simulation.
First, I tried to run IsaacSim with ROS2. It can work without ROS2, but ROS2 is important for working with additional robots. I couldn't succeed at first (I am not used to Linux). But then I removed the current IsaacSim installation, created the ROS2 bridge, and downloaded IsaacSim again. Ta-da, it worked!
Then I emailed the robot's manufacturer to ask whether they support Linux. They said yes, so it was time to work with a real robot.
I started by downloading the drivers for the robot, then the SDK, and set them up. First, I read data from the robot and visualized the path of the robot's sensor. Here is one of the first examples:

Then I created a node for ROS2. With this node, I bridged the robot's data into ROS2; this is the prerequisite for importing the data into IsaacSim. I tried both Python and C. Python did not work well, so I continued with C. (Mostly vibe coded.)
Repo: https://github.com/0xemkey/Robot-to-ROS2
I am using ML in this project, but my general knowledge is not as deep as a CS person's, so I am learning how it works in detail. Neural networks are still an abstract concept for me, so I did a deep dive into them last week.
I watched a video that gives a physical explanation of AI. I want to mention it especially because it excites me:
The neural network structure does not come from the brain and neurons; it comes from magnetic fields.
If you took Physics 2, or maybe remember from high school physics, each electron state in an atom is characterized by a set of four quantum numbers:
n: the electron's energy (principal quantum number),
ℓ: orbital angular momentum,
mℓ: orbital angular momentum projected along a chosen axis,
ms: spin (-1/2 or +1/2).
With these four numbers, we can uniquely specify a single electron. Inside an iron plate there are many electrons; some have spin -1/2 and some +1/2. If we put the plate in a magnetic field, the spins align, and the iron plate acts as a magnet.

The question addressed by the Ising model is finding the lowest-energy configuration of a system composed of interconnected "spins." As we know from thermodynamics, systems tend to reach a state of minimum energy. In the Ising model, each atom's spin affects its neighbors, so all the spins must settle into an appropriate configuration.
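A minimal sketch of this idea in pure Python: a one-dimensional Ising chain where we greedily flip any spin that lowers the total energy. The chain length, coupling constant, and greedy update scheme are my own illustrative choices, not something from the video or the linked paper.

```python
import random

def energy(spins, J=1.0):
    """Energy of a 1D Ising chain with nearest-neighbor coupling J.
    Aligned neighboring spins lower the energy (ferromagnetic case)."""
    return -J * sum(spins[i] * spins[i + 1] for i in range(len(spins) - 1))

def relax(spins, J=1.0, sweeps=10):
    """Greedy single-spin-flip dynamics: flip a spin only if the flip
    lowers the total energy, driving the chain toward a low-energy state."""
    spins = list(spins)
    for _ in range(sweeps):
        for i in range(len(spins)):
            trial = spins[:]
            trial[i] = -trial[i]
            if energy(trial, J) < energy(spins, J):
                spins = trial
    return spins

random.seed(0)
chain = [random.choice([-1, 1]) for _ in range(20)]
relaxed = relax(chain)
print(energy(chain), "->", energy(relaxed))  # energy never increases
```

Note that greedy flips can get stuck at domain walls; real Ising simulations use Metropolis sampling with a temperature, which this sketch deliberately leaves out.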
It also forms the basis of neural networks and the Boltzmann machine.
For more info: https://arxiv.org/html/2501.05394v1
The Hopfield network is one of the first models developed within artificial neural networks to store memory and restore corrupted information to its original state. The system encodes each pattern it receives (image, vector, signal) into its own weights, creating an energy landscape, and each pattern becomes an attractor within this landscape. When a corrupted or incomplete input is provided, the network reduces the energy by updating the neurons sequentially, and as a result, the input returns to the nearest attractor, i.e., the correct pattern stored in memory.
The fundamental problem Hopfield solved is associative memory, i.e., “returning from incomplete or noisy information to the correct memory/information.” In addition, because it works on the principle of energy minimization, it can also be used to solve many difficult optimization problems (NP-hard problems such as TSP, Max-Cut, SAT, routing, and scheduling).
The Hopfield network is very important in terms of artificial intelligence because it is one of the first recurrent neural network structures and forms the theoretical basis for modern RNNs, energy-based models (Boltzmann Machine, RBM, Deep Belief Networks), and even today’s Transformer attention mechanism. The Hopfield model is considered the starting point for the concept of “memory” in neural networks because it mathematically defines the idea of writing memory into synaptic connections.
For more info: https://arxiv.org/abs/2008.02217
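The "energy landscape with attractors" idea above can be sketched in a few lines of pure Python: store one pattern with the Hebbian rule, corrupt two of its bits, and let asynchronous updates pull the state back to the stored attractor. The pattern, network size, and update count are made up for illustration.

```python
def train(patterns):
    """Hebbian learning: write each pattern into the weight matrix."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, steps=5):
    """Asynchronous updates: each neuron aligns with its weighted input,
    lowering the network energy until it settles into a stored attractor."""
    state = list(state)
    n = len(state)
    for _ in range(steps):
        for i in range(n):
            h = sum(W[i][j] * state[j] for j in range(n))
            state[i] = 1 if h >= 0 else -1
    return state

pattern = [1, 1, 1, -1, -1, -1, 1, -1]
W = train([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]  # corrupt one bit
noisy[4] = -noisy[4]  # corrupt another
print(recall(W, noisy))  # recovers the stored pattern
```

With only one stored pattern the recall is exact; capacity limits (roughly 0.14·n patterns for n neurons) only start to matter when several patterns compete.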
The attention model is a mechanism that enables an artificial neural network to focus more on important information within the input. The basic idea is to calculate the relationship between each piece of information and other pieces of information and to determine which part is more “relevant” in a weighted manner.
The model generates three representations for each word or vector:
Query, Key, and Value.
Each Query calculates its similarity with all Keys; the similarities are normalized with softmax, and these scores create new information by weighting the Values. Thus, the network gives less weight to unnecessary information and more weight to important information.
Query: “What am I looking for?”
Key: “What information do I have?”
Value: “What is my actual content?”
A dot product is performed between Query and Key → similarity is found.
It is normalized with softmax → weight scores are obtained.
The Values are combined in a weighted sum with these scores → each word is re-represented according to its context.
The context is recalculated using these.
For more info: https://arxiv.org/abs/1706.03762
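The three steps above (dot product → softmax → weighted sum of Values) can be sketched in pure Python for a single Query. The vectors here are tiny made-up numbers chosen so that the first Key clearly matches the Query.

```python
import math

def softmax(xs):
    """Normalize raw scores into weights that sum to 1."""
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query:
    scores = q·k / sqrt(d), weights = softmax(scores),
    output = weighted sum of the values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(len(values[0]))]
    return out, weights

query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]      # the first key matches the query
values = [[10.0, 0.0], [0.0, 10.0]]
out, weights = attention(query, keys, values)
print(weights)  # the first key gets the larger weight
print(out)
```

In a real Transformer the Queries, Keys, and Values are produced by learned linear projections and processed as matrices; this sketch only shows the scoring-and-mixing core.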
If we draw a schematic to visualize the layers of a neural network:

So, to understand how neurons work, we need to learn the meaning and equations of weights, biases, the activation function, and some other statistical terms like loss, MSE, etc.
We can think of the learning process as curve fitting. Basically, the whole neural network tries to fit a curve and make predictions through its connections (weights), and these weights are updated with a feedback mechanism.

In this graph, the black line represents the target model, and the red one is the prediction from the model.
We can represent this prediction process with a schematic:

According to this schematic,

ŷ = σ(W_1 x + W_2 y)

where the W's are weights, x and y are the inputs, ŷ is the prediction, and σ is the activation function.
If we calculate the loss function,

L = ½ (ŷ − t)²

where, as mentioned before, ŷ is the prediction and t is the target.
We can calculate the effect of each weight on the loss function with partial derivatives, writing z = W_1 x + W_2 y for the weighted sum.
For W_2:

∂L/∂W_2 = ∂L/∂ŷ · ∂ŷ/∂z · ∂z/∂W_2

So,

∂L/∂W_2 = (ŷ − t) · σ'(z) · y

For W_1:

∂L/∂W_1 = (ŷ − t) · σ'(z) · x

where σ'(z) is the derivative of the activation function evaluated at z.
The loss function (a.k.a. mean squared error) is the measure of the difference between the target and predicted values. Over N samples, we can calculate it with:

L = (1/N) · Σᵢ (ŷᵢ − tᵢ)²
We can also calculate the effect of a weight on the loss function with a partial derivative:

∂L/∂W = (2/N) · Σᵢ (ŷᵢ − tᵢ) · ∂ŷᵢ/∂W
To update the weight:

W_new = W_old − η · ∂L/∂W

where η is the learning rate.
Chain Rule and Gradient Descent:

∂L/∂W = ∂L/∂ŷ · ∂ŷ/∂z · ∂z/∂W, and gradient descent repeats the update W ← W − η · ∂L/∂W until the loss stops decreasing.
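The whole derivation (prediction, squared-error loss, chain rule, weight update) can be checked numerically with a single neuron in pure Python. The sigmoid activation, learning rate, and training sample below are my own illustrative choices, not from the original derivation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Single neuron with two inputs x and y, as in the schematic above.
w1, w2, lr = 0.5, -0.3, 0.5   # initial weights and learning rate (illustrative)
x, y, t = 1.0, 2.0, 0.8       # inputs and target (made-up training sample)

losses = []
for _ in range(200):
    z = w1 * x + w2 * y       # weighted sum
    pred = sigmoid(z)         # prediction ŷ = σ(z)
    losses.append(0.5 * (pred - t) ** 2)
    # Chain rule: ∂L/∂w = (ŷ − t) · σ'(z) · input, with σ'(z) = σ(z)(1 − σ(z))
    grad = (pred - t) * pred * (1 - pred)
    w1 -= lr * grad * x       # gradient descent update
    w2 -= lr * grad * y

print(losses[0], "->", losses[-1])  # the loss shrinks toward zero
```

Running it shows the feedback loop in action: each update moves the weights against the gradient, and the prediction converges to the target.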
That is all for last week (14-21 November 2025).
Next week, I will continue with the activation function and its types. Additionally, I couldn’t provide code snippets, but I will next week.
My goals for next week are:
Create a teleoperation system
Create basic neurons and a neural network
Remember some of the reward functions in ML
Thanks @paragraph! You can read the first two weeks' blogs here: https://paragraph.com/@0xf8d928467d5531d70afed163bd9ba8f263bdc5f0/week-1 https://paragraph.com/@0xf8d928467d5531d70afed163bd9ba8f263bdc5f0/week-2 More to come, enjoy reading!