Visualizing and Working with High-Dimensional Data


I need some help identifying relationships and developing input pipelines for data with thousands of dimensions. There are many articles across the web, but they seem very confusing. Suppose we have data with 50,000 dimensions containing signed floating-point values, and all variables are equally important because each provides something distinct, so the model needs to learn from all of them. That makes most dimensionality reduction methods seem flawed to me.
In a case like this, what approach should be followed? I would be grateful if an example could be provided with TensorFlow.


Hi @Shahid_Nawaz.
You didn’t share a sample of your data or comment on the purpose of your model, so it is difficult to provide an answer that is more useful than what you have already found on the internet.
It seems to me, though, that this is a question of feature engineering more than of a model for which TensorFlow would be the framework.
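That said, on the worry that dimensionality reduction is "flawed" when every variable carries distinct signal: a reduction does not have to discard information proportionally to the number of dimensions dropped. Here is a small generic sketch (plain NumPy rather than TensorFlow, since no data sample was shared; the sizes and the Gaussian random projection are illustrative assumptions, not a prescription for your data) showing that a random linear projection from a few thousand dimensions down to a few hundred approximately preserves pairwise distances between samples, in the spirit of the Johnson–Lindenstrauss lemma:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: in practice you would use your own data
# (e.g. 50,000 features) instead of this synthetic matrix.
n_samples, n_features, n_components = 50, 2000, 256

# Signed floating-point data where every dimension carries signal.
X = rng.standard_normal((n_samples, n_features)).astype(np.float32)

# Gaussian random projection, scaled so expected norms are preserved.
P = rng.standard_normal((n_features, n_components)) / np.sqrt(n_components)
X_low = X @ P  # shape: (n_samples, n_components)

def pairwise_distances(A):
    # Euclidean distance between every pair of rows.
    diff = A[:, None, :] - A[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

d_hi = pairwise_distances(X)
d_lo = pairwise_distances(X_low)

# Compare distances before and after projection (off-diagonal pairs only).
mask = ~np.eye(n_samples, dtype=bool)
ratio = d_lo[mask] / d_hi[mask]
print(f"mean ratio: {ratio.mean():.3f}, std: {ratio.std():.3f}")
```

The distance ratios cluster tightly around 1.0, meaning the geometry of the data survives an ~8x reduction even though no individual feature was privileged. The same idea carries over to learned reductions (e.g. a dense bottleneck layer or an autoencoder in tf.keras), which can do better than a random projection because they adapt the projection to the data.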
Thank you.