

This week I focused on the simulation part, because without simulation I cannot generate any synthetic data, and without data there are no ML applications. Here is what I have done and what I am working on right now:
I can write data to ROS2 from the robot.
I can write data to ROS2 from the simulation (using IsaacSim).
I am trying to connect the robot and the simulation so that the real robot acts as a controller inside the simulation. The simulated robot must have 6 DoF (degrees of freedom); I will then map the angles and forces from the real robot onto the simulated one.
What are the current missing points?
First of all, I cannot start the simulation because some Python libraries are not installed. I noticed this too late because I did not read all of the error messages. Some of them were buried between the successful parts, so I did not scroll up and check the details. (MY FAULT!)
Anyway, I created a simulation with the Carter robot; it is the default mission in IsaacSim's ROS2 library. I gave its tires some initial angular velocity, set initial positions, then moved the pencil on the robot and wrote something. (I tried to write "Yiğit" because tomorrow is his birthday.) I took the pitch, roll and yaw angles from the robot (for more details you can check last week's blog here) and plotted them. You can see the graphs for the X axis and the yaw angle, and also my writing trial.

Is it finished? Of course not. I am still building the robot simulation; once I map the angles, I will be ready to extract data from the trials and use it for machine learning experiments.
I completed my playlist and, of course, started a new one.
Last week I studied normalization, the softmax algorithm, loss calculation, backpropagation, and finally model accuracy. Let's continue with these topics:
Why do we need normalization?
Normalization in Machine Learning usually refers to re-scaling input features so they are on a similar numerical range, for example making them have mean 0 and standard deviation 1, or squeezing them into [0, 1]. We do this because different features can have very different scales (like height in meters vs salary in dollars), and if we don’t normalize, the large-scale features dominate the gradients and make optimization harder. With normalization, the loss surface becomes better conditioned, so gradient-based algorithms (like those used in neural networks, logistic regression, SVMs, etc.) train more stably and usually converge faster.
Normalization:
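(The formula image here did not survive the export. Based on the two re-scalings described above, these are the standard forms: z-score standardization for mean 0 and standard deviation 1, and min-max scaling for the [0, 1] range.)

```latex
\text{z-score: } x' = \frac{x - \mu}{\sigma}
\qquad
\text{min-max: } x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}
```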

Why do we need Softmax?
The model first produces raw scores for each class (called logits), and softmax turns these scores into probabilities by exponentiating each score and dividing by the sum of all exponentials. This guarantees that each output is between 0 and 1 and that all outputs sum to 1, so they can be interpreted as class probabilities. Using softmax outputs lets us apply cross-entropy loss, which works very well with gradient descent and encourages the model to assign high probability to the correct class while lowering the probabilities of the others.
Softmax Formula:
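(The formula image is missing; this is the standard softmax definition matching the description above: exponentiate each logit $z_i$ and divide by the sum over all $K$ classes.)

```latex
\sigma(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \quad i = 1, \dots, K
```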

Python Code Sample:
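(The code screenshot did not survive the export; here is a minimal NumPy sketch of what it likely showed, following the formula above.)

```python
import numpy as np

def softmax(logits):
    # Exponentiate each logit, then normalize by the sum of all exponentials
    exps = np.exp(logits)
    return exps / np.sum(exps)

scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)  # each entry in (0, 1), all entries sum to 1
```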

Overflow Prevention:
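(The image here is also missing. The standard trick is to subtract the maximum logit before exponentiating; this leaves the output unchanged, because the extra factor cancels in the numerator and denominator, but keeps np.exp from overflowing on large logits.)

```python
import numpy as np

def stable_softmax(logits):
    # Shifting by the max logit does not change the result,
    # but guarantees the largest exponent is exp(0) = 1
    shifted = logits - np.max(logits)
    exps = np.exp(shifted)
    return exps / np.sum(exps)

big = np.array([1000.0, 1001.0, 1002.0])
probs = stable_softmax(big)  # naive exp(1000.0) would overflow to inf
```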

Reminder, Shannon Entropy Equation:
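(Equation image missing; this is the standard form of Shannon entropy over a discrete distribution $p$.)

```latex
H(X) = -\sum_{i} p(x_i) \log_2 p(x_i)
```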

Calculating Loss with Categorical Cross-Entropy:
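(The equation image is missing; this is the standard categorical cross-entropy, written to match the note below: with a one-hot target $y$, the sum collapses to a single term at the correct-class index $k$.)

```latex
L = -\sum_{i} y_i \log(\hat{y}_i) = -\log(\hat{y}_k)
```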


where k is the index of the correct (target) class.
Python Code Sample:
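(Again, the original code image did not survive; a minimal sketch of categorical cross-entropy for a one-hot target, consistent with the definition above.)

```python
import numpy as np

def categorical_cross_entropy(target, prediction):
    # target is one-hot, so the loss reduces to -log of the
    # predicted probability at the correct-class index k
    k = int(np.argmax(target))
    return -np.log(prediction[k])

loss = categorical_cross_entropy(np.array([1, 0, 0]),
                                 np.array([0.7, 0.1, 0.2]))
```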

Example:
Our target is [1, 0, 0] and our prediction is [0.7, 0.1, 0.2]. Then our loss will be:
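(The result image is missing; working it out, the one-hot target selects index k = 0, so the loss is simply the negative log of 0.7.)

```latex
L = -\log(\hat{y}_k) = -\ln(0.7) \approx 0.357
```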


Accuracy:
The proportion of all classifications that were correct, whether positive or negative. It is mathematically defined as:
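(Formula image missing; this is the usual definition in terms of true/false positives and negatives.)

```latex
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
= \frac{\text{correct classifications}}{\text{total classifications}}
```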
In the spam classification example, accuracy measures the fraction of all emails correctly classified.

That's all for this one, see you next week.