Vegetation wildfires occur in most parts of the world, sometimes with disastrous effects. Wildfire incidents could be mitigated if timely and informative notifications are disseminated to the appropriate parties. At any satellite overpass, hundreds of detections might be obtained from an intermittent stream of active fire data derived from earth observation satellites, and notifications might be disseminated to hundreds of parties. Ideally, when an interested or affected party receives a wildfire notification, the receiver should immediately be able to link to the visualisation resource via a web-connected device, i.e. the visualisation should be available on demand or in 'demand-time'. To achieve demand-time results, data streams of wildfire events need to be processed rapidly against large datasets of contextual variables. Failure to do so would result in processing backlogs and delay the generation of three-dimensional (3D) visualisations. The primary purpose of this research was to evaluate the efficiency of different algorithmic and architectural styles for process chaining in the cloud to generate 3D spatial context visualisations around detected active fires in demand-time. This study investigated efficiencies across four dimensions: 1) software libraries, 2) tightly-coupled serial versus loosely-coupled distributed geoprocessing chain implementations, 3) standardised geoprocessing web service (OGC WPS) implementations versus bespoke software solutions, and 4) system deployment in a cloud versus deployment on a single thread on commodity hardware. Geoprocessing chains were implemented in Python using open-source libraries and frameworks. Results indicate that loading objects into memory by using a software library is more efficient than using a spatial database, and that loosely-coupled distributed geoprocessing is faster than tightly-coupled serial geoprocessing.
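To illustrate the distinction between tightly-coupled serial and loosely-coupled distributed chaining, the following is a minimal Python sketch, not the study's actual implementation: the fire points and the per-detection contextual step (`buffer_context`) are hypothetical placeholders, and the fan-out uses the standard library's process pool rather than any particular cloud framework.

```python
# Sketch: the same per-detection work run serially versus fanned out
# across loosely-coupled worker processes. All names and data are
# illustrative placeholders, not the thesis implementation.
from multiprocessing import Pool

# Toy stand-ins for active fire detections (lon, lat pairs); real data
# would arrive as a stream from earth observation satellite overpasses.
FIRES = [(i * 0.1, i * 0.1) for i in range(100)]

def buffer_context(point, radius=0.05):
    """Placeholder for the contextual processing around one detection
    (e.g. extracting terrain or land cover for the 3D visualisation).
    Here it simply returns a bounding box around the point."""
    x, y = point
    return (x - radius, y - radius, x + radius, y + radius)

def serial_chain(fires):
    """Tightly-coupled serial chain: one worker handles every fire."""
    return [buffer_context(p) for p in fires]

def distributed_chain(fires, workers=4):
    """Loosely-coupled distributed chain: detections are partitioned
    across independent worker processes that share no state."""
    with Pool(workers) as pool:
        return pool.map(buffer_context, fires)

if __name__ == "__main__":
    # Both chains produce identical results; only the architecture differs.
    assert serial_chain(FIRES) == distributed_chain(FIRES)
```

Because each detection is processed independently, the distributed variant scales with the number of workers, which mirrors the study's finding that adding cloud instances improves chain performance.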
Web Processing Services cause a significant deterioration in the performance of the geoprocessing chain. Overall, geoprocessing with Web Processing Services on a single thread on commodity hardware does not deliver demand-time results. Demand-time could only be achieved with bespoke software solutions or with a large number of cloud instances, which was not cost-effective. Free and low-cost options make the cloud computing paradigm affordable to evaluate. As the number of cloud instances increased, the performance of the geoprocessing chain improved. Demand-time results can therefore be achieved by using the optimal number of cloud instances for geoprocessing. However, there is a trade-off between the number and/or size of instances and cost.