I have installed tensorflow-metal on my MacBook Pro (38-core GPU, 32 GB RAM), and the runtime is acceptable when I run my first experiment (a Transformer model using only the encoder block):
- Batches: 4
- Data X training: (32, 1024, 512), EEG data from 1 subject
- Epochs: 200
- Time? It takes around 40 minutes to finish
But when I run my second experiment:
- Batches: 72
- Data X training: (1024, 1024, 512), EEG data from 32 subjects
- Epochs: 1000
- Time? After 6 hours, it is still on epoch 1!
Is this normal?
Is the tensorflow-metal plugin not using the 38-core GPU?
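To rule out the GPU question first, one thing I tried is asking TensorFlow directly which devices it can see. This is a minimal sketch (the helper name `metal_gpu_count` is my own, not from any library); when tensorflow-metal is active, the GPU list should be non-empty:

```python
def metal_gpu_count():
    """Return how many GPUs TensorFlow can see, or None if TF is not installed."""
    try:
        import tensorflow as tf  # tensorflow-metal registers the GPU at import time
    except ImportError:
        return None
    # With tensorflow-metal working, this list contains one "GPU" device on Apple silicon.
    return len(tf.config.list_physical_devices("GPU"))


if __name__ == "__main__":
    count = metal_gpu_count()
    print("GPUs visible to TensorFlow:", count)
```

If this prints 0, TensorFlow is running on CPU only; if it prints 1, the Metal backend is registered and the slowdown is more likely the 32x larger dataset and input pipeline than a missing GPU. `tf.debugging.set_log_device_placement(True)` can additionally log which device each op actually runs on.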