Straight to the question: how can I apply input data with ndim=1 to an LSTM network?
I have 12 input values per observation for an LSTM network with two layers of 128 units each, and I'm trying to use it with the PPO TF-Agent. I defined the observation_spec of the environment as
self._observation_spec = array_spec.BoundedArraySpec(shape=(12,), dtype=np.float32, name='observation')
and networks as
def create_networks(tf_env):
    actor_net = ActorDistributionRnnNetwork(
        tf_env.observation_spec(),
        tf_env.action_spec(),
        input_fc_layer_params=None,
        lstm_size=(128, 128),
        output_fc_layer_params=None,
        activation_fn=None)
    value_net = ValueRnnNetwork(
        tf_env.observation_spec(),
        input_fc_layer_params=None,
        lstm_size=(128, 128),
        output_fc_layer_params=None,
        activation_fn=None)
    return actor_net, value_net
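(For context on the shapes involved: Keras RNN layers consume rank-3 inputs, [batch, time, features], so a single observation of shape (12,) needs a batch axis and a time axis prepended before it reaches an LSTM. A minimal numpy sketch of that shape bookkeeping, purely illustrative and not TF-Agents API:)

```python
import numpy as np

# A single observation with 12 features, ndim=1 -- what an
# observation_spec with shape=(12,) describes.
obs = np.zeros(12, dtype=np.float32)
print(obs.shape)        # (12,)

# An LSTM consumes sequences of shape [batch, time, features].
# Prepending a batch axis and a time axis gives ndim=3.
batched = obs[np.newaxis, np.newaxis, :]
print(batched.shape)    # (1, 1, 12)

# Stacking T consecutive observations yields a length-T sequence
# for one batch element, still with 12 features per step.
T = 5
sequence = np.stack([obs] * T)[np.newaxis, ...]
print(sequence.shape)   # (1, 5, 12)
```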
But I'm getting the following error:
ValueError: Input 0 of layer bias_layer is incompatible with the layer: : expected min_ndim=2, found ndim=1. Full shape received: 
I see that the observation_spec I defined has only one array dimension, [vel_y, vel_z, right_hip_theta, …], but I can't think of any other way to define the observation_spec with two or more array dimensions.
How can I apply input data with ndim=1 to an LSTM network?
Do I need to replicate the data, like

[ [vel_y, vel_z, right_hip_theta, … ],
  [vel_y, vel_z, right_hip_theta, … ] ]

or am I missing something else?
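(As a quick shape check on the replication idea above, purely illustrative numpy, not TF-Agents API: duplicating the row only produces ndim=2, which could be read as [time, features] but still lacks the batch axis that Keras RNN layers expect:)

```python
import numpy as np

obs = np.zeros(12, dtype=np.float32)

# Replicating the observation as in the snippet above gives ndim=2...
replicated = np.stack([obs, obs])
print(replicated.shape)  # (2, 12)

# ...which is still missing the batch axis of the rank-3
# [batch, time, features] layout that Keras RNN layers consume.
with_batch = replicated[np.newaxis, ...]
print(with_batch.shape)  # (1, 2, 12)
```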
I've been struggling with this for more than two weeks now, and I hope I can finally make some progress…