JupyterHub App

About

JupyterHub brings the power of notebooks to groups of users. It gives users access to computational environments and resources without burdening them with installation and maintenance tasks.

How it works

The JupyterHub app runs as a container (like any other Cloudron app). The hub app manages user login and creates a separate container for each user's notebooks. The notebook container is created from the c.DockerSpawner.container_image setting (see below on how to customize this). Each notebook container runs with a configurable memory limit based on c.Spawner.mem_limit. The advantage of this approach is that you can control how much compute/memory is allocated to each user, and a runaway notebook cannot bring down the whole server.
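Concretely, both settings go into the custom configuration file described below. A minimal sketch (the values shown here simply mirror the defaults mentioned elsewhere on this page):

```python
# /app/data/customconfig.py -- minimal sketch; values mirror the documented defaults
# Docker image each user's notebook container is created from
c.DockerSpawner.container_image = 'jupyter/datascience-notebook'
# Memory cap per notebook container, including swap
c.Spawner.mem_limit = '500M'
```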

If you change the notebook image or any configuration, the notebook containers have to be recreated. To help with this, run the /app/code/remove_notebook_containers.py script. Note that this only removes the containers, not the users' notebooks themselves.

Selecting a notebook image

By default, the app uses the jupyter/datascience-notebook image. The upstream JupyterHub project maintains many other notebook images.

To use a different notebook image, use the File Manager to place custom configuration under /app/data/customconfig.py. For example, add a line like the one below:

c.DockerSpawner.container_image = 'jupyter/all-spark-notebook:77e10160c7ef'

It is also possible to use any Docker image built on top of jupyter/base-notebook. To do this, create a Dockerfile starting with FROM jupyter/base-notebook, build and push the image to Docker Hub, and set the image name in the configuration above.

To apply the configuration, restart the app using the Restart button.

Remove existing notebook containers

For the new container image to take effect, you have to remove any existing notebook containers using the /app/code/remove_notebook_containers.py script. Notebook data remains intact when the containers are deleted.

Notebook Memory limit

By default, each notebook is given a memory limit of 500M (including swap). This can be changed by editing /app/data/customconfig.py:

c.Spawner.mem_limit = '1G'
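JupyterHub's Spawner.mem_limit is, to the best of our knowledge, a byte-specification trait, so it accepts either a string with a K/M/G/T unit suffix or a plain integer byte count. The following forms (values illustrative) are equivalent ways to express a limit:

```python
# /app/data/customconfig.py -- alternative ways to express the memory limit
c.Spawner.mem_limit = '1G'          # string with a unit suffix (K/M/G/T)
c.Spawner.mem_limit = '512M'        # half a gigabyte
c.Spawner.mem_limit = 1073741824    # plain integer byte count (1 GiB)
```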

To apply the configuration, restart the app using the Restart button.

Remove existing notebook containers

For the new memory limit to take effect, you have to remove any existing notebook containers using the /app/code/remove_notebook_containers.py script. Notebook data remains intact when the containers are deleted.

Notebook persistence

All notebooks are part of the application backup and persisted across updates.

Libraries installed using conda live in the notebook container and are not part of the backup. Idle notebooks are shut down over time, but their containers are not destroyed, so any libraries installed in a notebook container will generally persist.

If the notebook image is changed, the old notebook containers are destroyed. This means that any libraries that were previously installed have to be re-installed.

Extensions

It is possible to install and enable extensions. However, as noted in Notebook persistence, extensions installed using pip or conda are not part of the backup and thus need to be re-installed when the notebook image is changed.

Other custom configuration

Use the File Manager to place custom configuration under /app/data/customconfig.py.
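For example, a customconfig.py that combines the options covered on this page might look like the sketch below. The image tag is illustrative, and c.Spawner.cpu_limit is a standard JupyterHub spawner option that we assume (but have not verified) this app's spawner enforces:

```python
# /app/data/customconfig.py -- combined sketch; image tag and cpu_limit are illustrative
c.DockerSpawner.container_image = 'jupyter/all-spark-notebook:77e10160c7ef'
c.Spawner.mem_limit = '1G'    # per-user memory cap, including swap
c.Spawner.cpu_limit = 1.0     # assumption: at most one CPU core per notebook, if the spawner enforces it
```

Remember to restart the app and remove existing notebook containers after changing any of these settings.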

See the JupyterHub documentation for more information.