Monday, November 21st, 2022 - Saturday, November 26th, 2022
When an image dataset has only two classes, i.e. the labels are "0" and "1", the `label_mode` of `tf.keras.utils.image_dataset_from_directory` should be `"binary"` and the loss function should be `BinaryCrossentropy` instead of `SparseCategoricalCrossentropy`. Otherwise, the accuracy jitters around 50%, which means the model learns nothing.
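A minimal sketch of this pairing (the directory path, image size, and model head are placeholders, not my actual setup):

```python
import tensorflow as tf

# Assumed directory layout: base_dir/0/*.jpg and base_dir/1/*.jpg.
# With label_mode="binary", labels are floats 0.0/1.0, which matches
# BinaryCrossentropy and a single sigmoid output unit.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "path/to/base_dir",          # placeholder path
    label_mode="binary",
    image_size=(224, 224),
    batch_size=32,
)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # one unit, not two
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.BinaryCrossentropy(),  # not SparseCategoricalCrossentropy
    metrics=["accuracy"],
)
```

Alternatively, keeping `label_mode="int"` with a two-unit softmax head also works with `SparseCategoricalCrossentropy`; the failure mode appears when the label format and the loss are mismatched.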
I read the paper [1] and studied the code.
It took me a few days to understand window-based self-attention, shifted window-based self-attention, and the Swin Transformer block.
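The core idea that took the longest to click can be sketched with plain NumPy: attention is computed only within non-overlapping windows, and alternating blocks cyclically shift the feature map so that information crosses window boundaries. This is a simplified illustration of the partition/shift steps, not the official implementation:

```python
import numpy as np

def window_partition(x, window_size):
    # Split an (H, W, C) feature map into non-overlapping windows of shape
    # (window_size, window_size, C); self-attention is computed within each
    # window independently, which is the core of W-MSA in Swin.
    H, W, C = x.shape
    x = x.reshape(H // window_size, window_size, W // window_size, window_size, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, window_size, window_size, C)

def shift_windows(x, shift):
    # SW-MSA cyclically shifts the feature map before partitioning so that
    # successive blocks mix information across window boundaries.
    return np.roll(x, shift=(-shift, -shift), axis=(0, 1))

feat = np.arange(8 * 8 * 3).reshape(8, 8, 3).astype(np.float32)
windows = window_partition(feat, 4)                  # 4 windows of 4x4 patches
shifted = window_partition(shift_windows(feat, 2), 4)
```

Each window here holds 16 patches, so attention cost grows linearly with image size instead of quadratically as in a plain ViT.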
I built a Python module to store all the functions for loading the datasets. Consequently, the code works on a new dataset by changing only the `base_dir`.
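A minimal sketch of that kind of loader module; the function name and the split layout are assumptions for illustration, not my actual file:

```python
import tempfile
from pathlib import Path

def list_splits(base_dir):
    # Discover the split sub-directories (e.g. train/val/test) under
    # base_dir, so switching datasets only means changing base_dir.
    return sorted(p.name for p in Path(base_dir).iterdir() if p.is_dir())

# Demonstrate on a throwaway directory layout.
base_dir = Path(tempfile.mkdtemp())
for split in ("train", "val", "test"):
    (base_dir / split).mkdir()
splits = list_splits(base_dir)
```

Keeping every path-dependent detail behind one parameter like this is what lets the rest of the training code stay untouched between datasets.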
I was glad that the models ran well.
The models seemed to produce relatively good results on some small datasets, but they didn't work as well on the micro-expression dataset.
Besides ViT, SL-ViT, and the Swin Transformer, I found that there are many other transformer variants. It seems impossible to learn all of them.
Eddie He