Set up your favorite editor to develop Keras

Here is a list of editors that you can use to develop Keras.
The steps to set up each of them are provided.
Feel free to reply to this topic to add more.

GitHub Codespaces

This is the easiest option. It helps you set up the environment with one click.

You can click “Code → new codespace” on your fork’s web page to open it in GitHub Codespaces.
You can start coding and running the tests there right away.
However, Codespaces is only available in beta. You need to request early access to use it.

Visual Studio Code

This is also an easy option for beginners.

Clone your fork of the repo to your computer.
Open Visual Studio Code.
Install the Remote-Containers extension.
Open the cloned folder and click “Reopen in Container” in the popup notification.
You can start coding and running the tests there right away.
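Under the hood, the Remote-Containers extension reads the repo’s `.devcontainer` configuration. Keras ships its own, but as a rough sketch of the general shape (the name and command here are illustrative, not Keras’s actual config), a minimal `devcontainer.json` looks like:

```json
{
  "name": "keras-dev",
  "dockerFile": "Dockerfile",
  "postCreateCommand": "pip install -r requirements.txt"
}
```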


Thank you,
I think we are missing the linting tools in keras/requirements.txt at master · keras-team/keras · GitHub
See my comment at https://github.com/keras-team/keras/pull/15006#pullrequestreview-716500818

If it is complete, you can close my June contribution offer at Vscode/Github codespaces

We are also waiting for the same for TF core at Tensorflow with GitHub Codespaces.

For TF Addons it was rejected more than 1 year ago, but probably we could re-evaluate it:

Since May 2020 we have maintained the .devcontainers for TF, Keras (not standalone) and TF Addons at:



I’ve tried the new Keras .devcontainer.

I think using it locally with Remote-Containers is not very usable with a local Keras checkout: by default, all the build files and any new files are created with root user/permissions, but on the host directory. So when you go back to your host you will find many root-owned files and folders.
This is why we have already discussed with SIG-build adding “a standard” UID/GID in the Docker image:

I also think that requirements.txt is quite heavy, so it slows down the Codespace/container bootstrap: it requires installing tf-nightly (so the GPU wheel) and its dependencies on every new Codespace/Docker container instance that you launch.
Codespaces are also CPU-only VMs, so you have the useless overhead of a GPU image plus GPU wheel downloads before you can start to code anything.

Wouldn’t it be better to rely on Keras nightly Docker images, CPU-only for Codespaces or for local CPU-only machines, where these dependencies are already installed?
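As a sketch of that idea, the devcontainer could point straight at a prebuilt CPU image instead of installing the wheel on every bootstrap (this assumes the `tensorflow/tensorflow:nightly` CPU tag on Docker Hub; verify the tag before relying on it):

```json
{
  "name": "keras-dev-cpu",
  "image": "tensorflow/tensorflow:nightly",
  "postCreateCommand": "pip install -r requirements.txt"
}
```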

Just a side note, I don’t know if anyone here could get in contact with the Bazel team.

We are suffering a bit of a usability gap in Codespaces/VSCode due to the missing test integration in the official Google VSCode Bazel extension:

SIG-build to add “a standard” UID/GID in the Docker image…

@Bhack +1 for the standard UID/GID, is it “user google”?

usable with a Keras local checkout…

Some anecdata… I was able to set up Keras on a local Docker image, build Keras and run a test, following the Keras Contributing.md guides from @Scott_Zhu, in 12 min with TF 2.6. This compares to 4+ hours for full TF. Wow.

Caveats… 2.6 GHz MBP with git, VSCode and Docker preinstalled; started in the GitHub UI by cloning keras-team/keras, then pressing “.” in the browser to launch the VSCode web UI, which allows for a local install and build of the devcontainer via local VSCode.

I already have a “default” user at:

We could give it the name that we want.

I don’t think we have many alternative solutions now, as this has been open since 2013:

It is really different, as TensorFlow is Python, C++ and all the third-party dependencies that we compile from source (e.g. LLVM etc.).

We need to invest time in this to have a similar experience in Codespaces/VSCode remote containers:

We are indeed missing the linting tools.
I am working on that.
Will update the contributing guide afterwards for the linting instructions.

The Keras nightly Docker image sounds like a good solution.
I will see whether it works.
UPDATE: I found a tf-nightly image; we will see if that works.
There is no keras-nightly image.

Any suggestions for the file owner permission issue?

As I have already mentioned in the previous post, we don’t have many solutions at Docker upstream:

As you can see in my PR mentioned in the previous posts, I’ve just used the official trick to add a user, as we already had in other official Microsoft devcontainers on GitHub.
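For reference, the usual non-root-user trick from the Microsoft devcontainer templates is a Dockerfile stanza along these lines (the username and the 1000/1000 UID/GID are the common defaults, not necessarily what the Keras PR uses):

```dockerfile
ARG USERNAME=vscode
ARG USER_UID=1000
ARG USER_GID=$USER_UID

# Create a non-root user whose UID/GID match the host user, so files
# created in the mounted workspace are not owned by root on the host.
RUN groupadd --gid $USER_GID $USERNAME \
    && useradd --uid $USER_UID --gid $USER_GID -m $USERNAME

USER $USERNAME
```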

Yes, and consider that the size difference between the GPU and CPU images alone will have a noticeable download impact when you need to quickly open a Codespace or a container just to contribute a PR.

Also, using `"postCreateCommand"` instead of a final layer in the nightly image will create overhead on every new container bootstrap, even when the image is already available on the host.

I am not sure if the `-e` flag also works with a devcontainer.
I used it to map the user/group IDs in the Docker container so that files would have the same owner (for my Docker vim env, not VSCode).

We “talked” about this some time ago at:

I don’t remember if it worked in the devcontainer, but generally it could be a problem without having the real user in the image/container.
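If I read the devcontainer docs correctly, there is also an `updateRemoteUserUID` option that rewrites the container user’s UID/GID to match the local user on Linux. A sketch, worth double-checking against the current spec:

```json
{
  "remoteUser": "vscode",
  "updateRemoteUserUID": true
}
```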

If I remember correctly, I had some specific problems with the Bazel cache permissions with a persistent volume, also on a local setup (VSCode + Remote-Containers extension).

Check also the official documentation:

@haifeng Any news on these topics?

In the meantime I’ve created a small fix:

But you need to tell me what you want to do with the two other discussed issues:

  • no root-user file permissions on the source mounted volume
  • use a Keras nightly image instead of manually installing and updating tf-nightly every time in every container

P.s. in the long term we will probably have a native solution for the first point with kernels >= 5.12, but in the meantime I think we could use the standard solution of adding a non-root user.

@Bhack
Sure, we can add a non-root user, as long as it works well for both the VSCode and standalone Docker envs.
For the nightly image, I don’t think we have a keras-nightly.
We can use tf-nightly, but I am not sure if it works well with SSH authentication for GitHub when using Codespaces.

Would you help us make these changes? I feel you are more familiar with this setup than me. : )

Sure, we can add a non-root user, as long as it works well for both the VSCode and standalone Docker envs.

I’ve updated the PR.

For the nightly image, I don’t think we have a keras-nightly.
I think that the tf-nightly TensorFlow image, i.e. the GPU version, is a little bit too large just for Keras.

You could transform the postCreateCommand into a Dockerfile layer, but requirements.txt is outside the Dockerfile context.
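One common workaround, assuming the Docker build context is switched to the repository root so the file becomes visible to the build, is to copy it in and install during the build:

```dockerfile
# Assumes the build context is the repository root,
# so requirements.txt is visible to the Dockerfile.
COPY requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt \
    && rm /tmp/requirements.txt
```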

If we don’t have clear knowledge of the breaking changes in the Keras tf-nightly dependency, we need to install the `tf-nightly` wheel every day.

At least you could find a solution to install tf-nightly-cpu to lower this daily overhead when we are on a Codespace or on a CPU-only machine, as not all PRs have GPU requirements.

Yes, I think it is a good idea.
If the contributor doesn’t make any GPU-related changes, they can always uninstall tf-nightly and install tf-nightly-cpu for future updates.
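In devcontainer terms, a CPU-only variant could simply swap the wheel in the bootstrap step. A sketch, not the actual Keras config (the `tf-nightly-cpu` package name is the one discussed above):

```json
{
  "name": "keras-dev-cpu",
  "postCreateCommand": "pip install tf-nightly-cpu"
}
```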

I don’t think we can change anything in requirements.txt. I believe it has to be the GPU version of TF to run some of the tests.

So is the large tf-nightly image itself causing any issue?
It is a choice between using a large image or installing a large package on startup.

The difference is that with the postCreateCommand you have this overhead/lag for every container you launch, whereas once the image/layer is downloaded and cached the first time, you don’t have this overhead anymore.

This advantage is partially invalidated if we ask contributors to update tf-nightly, or a tf-nightly layer in the image, every single day.