I haven't been able to solve this error. I have been trying for over two months but haven't found a fix.

```
2024-03-28 09:48:28.083452: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2024-03-28 09:48:29.889006: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
Traceback (most recent call last):
  File "D:\EXAMGUARDD\EXAM\Tensorflow\models\research\object_detection\builders\model_builder_tf2_test.py", line 24, in <module>
    from object_detection.builders import model_builder
  File "D:\EXAMGUARDD\EXAM\cheating\Lib\site-packages\object_detection-0.1-py3.12.egg\object_detection\builders\model_builder.py", line 26, in <module>
    from object_detection.builders import hyperparams_builder
  File "D:\EXAMGUARDD\EXAM\cheating\Lib\site-packages\object_detection-0.1-py3.12.egg\object_detection\builders\hyperparams_builder.py", line 27, in <module>
    from object_detection.core import freezable_sync_batch_norm
  File "D:\EXAMGUARDD\EXAM\cheating\Lib\site-packages\object_detection-0.1-py3.12.egg\object_detection\core\freezable_sync_batch_norm.py", line 20, in <module>
    class FreezableSyncBatchNorm(tf.keras.layers.experimental.SyncBatchNormalization):
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'keras._tf_keras.keras.layers' has no attribute 'experimental'
```

Hi @Hamna_Sattar, `tf.keras.layers.experimental` was deprecated. Instead of `tf.keras.layers.experimental.SyncBatchNormalization`, use `tf.keras.layers.BatchNormalization` and pass `True` to its `synchronized` argument. When `synchronized=True`, the layer synchronizes the global batch statistics (mean and variance) across all devices at each training step in a distributed training strategy. Thank you.
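To illustrate the replacement, here is a minimal sketch. It assumes TensorFlow 2.12 or later (where `BatchNormalization` gained the `synchronized` argument); the `FreezableSyncBatchNorm` class shown is only a hypothetical base-class swap, not the full Object Detection API implementation, which adds extra freezing logic.

```python
import tensorflow as tf

# Old (removed): tf.keras.layers.experimental.SyncBatchNormalization()
# New: synchronized batch norm is a flag on the standard layer.
sync_bn = tf.keras.layers.BatchNormalization(synchronized=True)

# The layer behaves like ordinary batch norm outside a distribution
# strategy; under tf.distribute it syncs statistics across replicas.
x = tf.random.normal((4, 8))
y = sync_bn(x, training=True)
print(y.shape)  # (4, 8)


# Hypothetical sketch of how freezable_sync_batch_norm.py could swap
# its base class; the real class carries additional behavior.
class FreezableSyncBatchNorm(tf.keras.layers.BatchNormalization):
    def __init__(self, **kwargs):
        super().__init__(synchronized=True, **kwargs)
```

Note that outside a `tf.distribute` strategy, `synchronized=True` has no effect on the numerics, so the layer remains a drop-in replacement for single-device training as well.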