Parallelism and batch jobs
====================================================================================================

Simple example
----------------------------------------------------------------------------------------------------

To get your answers as quickly as possible, you will probably want to run your jobs in parallel. The best way to do that is to use :obj:`wait_for_model` from :obj:`options`. This tells the API to hold your request until a model is available, so when you run large jobs it gives the API time to ramp up resources to match your requests, no matter how much parallelism you are running. Requests during the first few minutes will be stalled a bit longer, so expect them to run slower initially; then, as the API scales up, requests should run as fast as single requests and you can process your load as quickly as possible.

Here is a small example:

.. only:: python

   .. literalinclude:: ../../../tests/documentation/test_parallelism.py
       :language: python
       :start-after: START python_parallelism
       :end-before: END python_parallelism
       :dedent: 8

.. only:: curl

   .. literalinclude:: ../../../tests/documentation/test_parallelism.py
       :language: bash
       :start-after: START curl_parallelism
       :end-before: END curl_parallelism
       :dedent: 8
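Independent of the snippets above, the pattern can be sketched as follows. This is a minimal illustration, not the tested example: the model name is a placeholder, and the actual HTTP call is left as a comment so the sketch stays self-contained (sending it for real would require :obj:`requests` and an API token).

```python
import json
from concurrent.futures import ThreadPoolExecutor

# Hypothetical endpoint; substitute the model you actually want to query.
API_URL = "https://api-inference.huggingface.co/models/bert-base-uncased"

def build_payload(text):
    # "wait_for_model" asks the API to queue the request until a model
    # replica is available instead of returning an error immediately.
    return json.dumps({"inputs": text, "options": {"wait_for_model": True}})

def query(text):
    payload = build_payload(text)
    # A real call would look like (assuming an API_TOKEN variable):
    # response = requests.post(
    #     API_URL,
    #     headers={"Authorization": f"Bearer {API_TOKEN}"},
    #     data=payload,
    # )
    # return response.json()
    return payload  # placeholder so the sketch runs without network access

texts = [f"Example input {i}" for i in range(16)]

# Submit many requests concurrently; wait_for_model lets them queue
# instead of failing while the API scales up.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(query, texts))
```

The key point is simply that every payload carries ``"options": {"wait_for_model": True}``, so concurrent requests queue rather than fail while capacity is being provisioned.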