Run Python tests without compiling TF

Is there a way to run bazel test in a git checkout without compiling TensorFlow?
E.g. just using a TensorFlow wheel installed with pip install tensorflow.


Copy the test file out of the tensorflow folder and run it.


But it is very inconvenient to copy single test files (or whole modules) back and forth every time.
It is also inconvenient to edit the related feature code directly in the target directory where the wheel is installed.

I was looking for a way to do everything Python-related in the source directory while using the C++ part (the .so libraries) installed by the wheel, without compiling and packaging the whole of TF in-source.
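The naive version of this idea would be to put the git checkout ahead of the wheel on sys.path, so the checkout's Python sources shadow the installed ones while compiled extensions still resolve from site-packages. (The rest of the thread explains why this doesn't actually work for TF, since the generated Python API and the .so targets are intertwined.) A generic sketch of the shadowing mechanism, using a throwaway module instead of TensorFlow itself:

```python
import os
import sys
import tempfile

# Stand-in for "the installed wheel": a module in one directory...
wheel_dir = tempfile.mkdtemp()
with open(os.path.join(wheel_dir, "mymod.py"), "w") as f:
    f.write("VERSION = 'wheel'\n")

# ...and for "the git checkout": the same module, edited, elsewhere.
checkout_dir = tempfile.mkdtemp()
with open(os.path.join(checkout_dir, "mymod.py"), "w") as f:
    f.write("VERSION = 'checkout'\n")

sys.path.insert(0, wheel_dir)     # the wheel install location
sys.path.insert(0, checkout_dir)  # the checkout shadows it

import mymod
print(mymod.VERSION)  # -> checkout
```

The checkout directory wins because it sits earlier on sys.path; for TF the catch is that the importable package is not a plain mirror of the source tree.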


We are discussing a PR at Run pytest targets without compiling by bhack · Pull Request #50163 · tensorflow/tensorflow · GitHub /cc @angerson @mihaimaruseac @perfinion


We are doing this for nightly builds: build a pip package, install it, and then run the TF tests against the package.

See tensorflow/tools/ci_build/rel/ubuntu/ and;l=483;drc=2a21421f01df1f3cc43f2cff42f62afec24247dd

But we need something more.
When working on a Python PR we also need to edit Python source files in the checkout dir, not only the Python tests.

Instead, I think that when we run these tests we are still using the Python files installed from the wheel.


Oh, definitely.

One thing I saw help was to copy the edited Python files over their equivalents in .../site-packages/tensorflow/... (or dist-packages, depending on sandboxing, if present) to fake an updated wheel.
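That copy hack can be scripted. A minimal sketch, with the caveat that the helper names and the use of importlib to locate the installed package directory are my own, not something from the thread:

```python
import importlib.util
import os
import shutil

def installed_pkg_dir(pkg_name):
    """Return the directory a package was installed into,
    e.g. .../site-packages/tensorflow for pkg_name='tensorflow'."""
    spec = importlib.util.find_spec(pkg_name)
    return os.path.dirname(spec.origin)

def push_edit(checkout_root, rel_path, pkg_name="tensorflow"):
    """Copy one edited file from the git checkout over its installed
    counterpart, 'faking an updated wheel' as described above."""
    src = os.path.join(checkout_root, pkg_name, rel_path)
    dst = os.path.join(installed_pkg_dir(pkg_name), rel_path)
    shutil.copy(src, dst)

# Hypothetical usage (paths are examples, not from the thread):
# push_edit(os.path.expanduser("~/src/tensorflow"),
#           "python/eager/def_function.py")
```

This still leaves the back-and-forth problem described below; it just automates one direction of it.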

Yes, that's the hack I use every day. But if I need to copy files back and forth every time, it's of little use.

Can we just use only the .so files installed from the wheel?


I don’t think that works, sadly :frowning:

Yes, it doesn't work.
That's why we have this thread. :wink:
Any hint on how we could achieve this is much appreciated; I will expand the PR.


We'll probably need to eliminate the big shared objects and the API generation step. These seem to be the bottlenecks, and requiring them to be built for every test is what causes most of TF to be built for just one test.

This is a huge effort though; I don't know if we can put a timeline on it.

I suppose we just need a bazel build option to build without the .so targets, no?
Then we could find a workaround to load the .so from the wheel.
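Mechanically, loading a module from an explicit file path under the wheel's install location is possible with importlib; the sketch below uses a plain .py file as a stand-in for a compiled .so (the same loader machinery handles extension modules). Whether TF's compiled internals can actually be loaded in isolation this way is exactly what the thread is questioning:

```python
import importlib.util

def load_from_path(mod_name, file_path):
    """Load a module (source or compiled extension) from an explicit
    file path instead of the normal sys.path search."""
    spec = importlib.util.spec_from_file_location(mod_name, file_path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod

# Hypothetical usage against a wheel install (path is an example):
# pywrap = load_from_path(
#     "_pywrap_tf",
#     "/usr/lib/python3/dist-packages/tensorflow/python/"
#     "_pywrap_tensorflow_internal.so")
```

The path and module name in the commented example are illustrative, not taken from the thread.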


There is a --build_tests_only option, but I am unsure how it behaves with regard to the .so dependencies.

That is what we have in tensorflow/tools/ci_build/builds/, so I don't think we can solve it with this.


I think that, as a first step, we could technically evaluate what we need to change at the dependency level in bazel to compile this target with a new option:

bazel build //tensorflow/python:all

without triggering compilation of the TF C++ targets.


I've closed the PR, as the current design doesn't let us separate the Python and C++ targets.


I was thinking we should be able to make

bazel build //tensorflow/python/some:test

only build the C++ bits needed for that test and nothing else. So, instead of compiling all kernels, generating huge libraries, and then generating the whole TF Python API, we would only compile the needed kernels and generate the small subset of TF that provides the vertical needed for this test.


I had a similar idea, but I supposed that the refactoring impact on the bazel dependencies would be roughly the same (and so too big to start working on without pre-approval).

Is this simpler?


E.g. Yesterday I was working on and

If you query the dependencies, it is really hard to find a min-cut point in the graph:

bazel query "deps(//tensorflow/python/eager:def_function)" --output graph

bazel query "deps(//tensorflow/python/eager:def_function_test)" --output graph

bazel query "buildfiles(deps(//tensorflow/python/eager:def_function))" --output package

bazel query "buildfiles(deps(//tensorflow/python/eager:def_function_test))" --output package

This is probably not the easiest target or the best query command, but I don't think finding a min-cut is much easier for other targets.


It's not easy right now. I estimate about two quarters' worth of work to fix this issue, but I hope the gains would justify the time.
