Configuring cron_jobs

The following table describes the config variables used to configure individual cron_jobs inside nebula (via the API):

Tip

The table slides to the side so its full information can be viewed (this may not be obvious due to the rtfd theme).

| field | type | example value | default value | description |
|-------|------|---------------|---------------|-------------|
| env_vars | dict | {"test": "test123"} | {} | a dict of environment variables that will be passed to each of the cron_job containers; use {} for none |
| docker_image | string | nginx | none - must be declared | the Docker image to run |
| running | bool | true | true | true - the cron_job is enabled, false - the cron_job is stopped |
| volumes | list | [] | [] | the volumes to mount inside the containers; follows the docker run -v syntax of host_path:container_path:ro/rw; use [] for none |
| devices | list | [] | [] | the devices to grant the containers access to; follows the docker run --device syntax of host_path:container_path:ro/rwm; use [] for none |
| privileged | bool | false | false | true - the cron_job containers get privileged permissions, false - no privileged permissions |
| schedule | string | 0 * * * * | none - must be declared | the schedule on which the cron_job runs; follows standard Linux cron schedule syntax |
| networks | list | ["my_network"] | ["nebula", "bridge"] | the networks the containers will be part of; if you wish to use the "host" or "none" network make sure it is alone in the list (["host"] or ["none"]); a default bridge user network named "nebula" is added in every case other than "host" or "none" - it can be added explicitly (["nebula", "example-net"]) or implicitly (["example-net"]) and is also added if an empty list ([]) is set; aside from the default "nebula" network (and the Docker default networks "bridge", "none" and "host") you will have to create any network you wish to use manually on the worker prior to using it; the main advantage of the default "nebula" user network over the "bridge" network is that it allows container name based discovery, so if a server has 2 containers named "con1" and "con2" you can ping "con2" from inside "con1" just by running "ping con2" |
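To make the fields above concrete, below is a minimal sketch (in Python) that builds a cron_job config body from the variables in the table. The field names come straight from the table; the image, schedule and network values are illustrative placeholders, and the snippet only constructs and prints the JSON body rather than calling the nebula API.

```python
import json

# Example cron_job config body built from the config variables in the table above.
# The docker_image, schedule and network values are placeholders - replace them
# with your own before sending the body to the nebula API.
cron_job_config = {
    "env_vars": {"test": "test123"},   # dict of env vars passed to the containers, {} for none
    "docker_image": "nginx",           # required - the Docker image to run
    "running": True,                   # True enables the cron_job, False stops it
    "volumes": [],                     # docker run -v style host_path:container_path:ro/rw entries
    "devices": [],                     # docker run --device style host_path:container_path:ro/rwm entries
    "privileged": False,               # True grants the containers privileged permissions
    "schedule": "0 * * * *",           # required - standard Linux cron schedule syntax
    "networks": ["nebula", "bridge"],  # networks the containers join; "host"/"none" must be alone in the list
}

print(json.dumps(cron_job_config, indent=2))
```

Remember that any custom network listed under networks (other than "nebula", "bridge", "none" and "host") must already exist on the worker, for example by creating it beforehand with `docker network create`.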