This is a significant release for Shiny, with a major new feature that was nearly a year in the making: support for asynchronous operations!

Without this capability, when Shiny performs a long-running calculation or task on behalf of one user, it stalls progress for every other user connected to the same process. As a result, Shiny apps that feature long-running calculations or tasks have generally been deployed with many R processes, each serving a small number of users; this works, but it is not the most efficient approach. Such applications now have an important new tool for improving performance under load.

Shiny async is implemented via integration with the future and promises packages. These two packages are used together:

1. Use future to perform long-running operations in a worker process that runs in the background, leaving Shiny processes free to serve other users in the meantime. This yields much better responsiveness under load, and much more predictable latency.
2. Use promises to handle the result of each long-running background operation back in the Shiny process, where additional processing can occur, such as further data manipulation, or displaying to the user via a reactive output.
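As a minimal illustration of the first half of this pairing (outside of Shiny entirely), `future()` hands a computation to a background R process and immediately returns a handle to the eventual result. The `Sys.sleep()` here is just a stand-in for real work:

```r
library(future)
plan(multisession)  # evaluate futures in separate background R sessions

f <- future({
  Sys.sleep(1)   # stand-in for a slow computation
  sum(1:10)
})

# The current process is free to do other work here.
# value() blocks only when we actually need the result:
value(f)
#> [1] 55
```

In a Shiny app you would rarely call `value()` directly, since it blocks; that is where promises come in.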

If your app has a small number of severe performance bottlenecks, you can use this technique to get massively better responsiveness under load. For example, if the httr::GET call in this server function takes 30 seconds to complete:

```r
server <- function(input, output, session) {
  r <- reactive({
    httr::GET(url) %>%
      httr::content("parsed")
  })

  output$plot <- renderPlot({
    r() %>%
      ggplot(aes(speed, dist)) + geom_point()
  })
}
```

then the entire R process is stalled for those 30 seconds. We can rewrite it asynchronously like this:

```r
library(promises)
library(future)
plan(multisession)

server <- function(input, output, session) {
  r <- reactive({
    future(httr::GET(url)) %...>%
      httr::content("parsed")
  })

  output$plot <- renderPlot({
    r() %...>% {
      ggplot(., aes(speed, dist)) + geom_point()
    }
  })
}
```

Even if the httr::GET(url) takes 30 seconds, the r reactive executes almost instantly, and returns control to the caller. The code inside future(...) is executed in a different R process that runs in the background, and whenever its result becomes available (i.e. in 30 seconds), the right-hand side of %...>% will be executed with that result. (%...>% is called a “promise pipe”; it works similarly to a magrittr pipe that knows how to wait for and “unwrap” promises.)
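You can try the promise pipe at the console, too. In this sketch, `promise_resolve()` (a promises helper that creates an already-resolved promise) stands in for a `future(...)` call; the pipe immediately returns a new promise, and its right-hand side runs only once the event loop delivers the value. Inside Shiny the event loop runs automatically; here we pump it by hand with `later::run_now()`:

```r
library(promises)

p <- promise_resolve(c(4, 9, 16)) %...>%  # stands in for future(...)
  sqrt()                                  # runs once the value is available

# p is itself a promise; nothing has run yet. Pump the event loop
# until the chained handler below has received the result:
result <- NULL
p %...>% { result <<- . }
while (is.null(result)) later::run_now()
result
#> [1] 2 3 4
```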

If the original (synchronous) code appeared in a Shiny app, then during that 30 seconds, the R process is stuck dealing with the download and can’t respond to any requests being made by other users. But with the async version, the R process only needs to kick off the operation, and then is free to service other requests. This means other users will only have to wait milliseconds, not minutes, for the app to respond.
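One practical note when converting an app this way: an error raised inside the chain surfaces as a promise rejection rather than an ordinary R error, and the promises package provides `catch()` and the `%...!%` pipe to handle it. A small sketch, using `promise_resolve()` again in place of a future, and an invented negative-input check as the failure case:

```r
library(promises)

# A hypothetical helper: compute sqrt asynchronously, recovering from errors.
safe_sqrt <- function(x) {
  promise_resolve(x) %...>%
    { if (. < 0) stop("negative input") else sqrt(.) } %...!%
    { paste("recovered from error:", conditionMessage(.)) }  # error handler
}

p <- safe_sqrt(-1)
# p resolves to "recovered from error: negative input" instead of rejecting
```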

### Case study

We’ve created a detailed case study that walks through the async conversion of a realistic example app. This app processes low-level logging data from RStudio’s CRAN mirrors, to let us explore the heaviest downloaders for each day.

To load test this example app, we launched 50 sessions of simulated load, with a 5 second delay between each launch, and directed this traffic to a single R process. We then rewrote the app to use futures and promises, and reran the load test with this async version. (The tools we used to perform the load testing are not yet publicly available, but you can refer to Sean Lopp’s talk at rstudio::conf 2018 for a preview.)

Under these conditions, the finished async version displays significantly lower (mean) response times than the original. In the table below, “HTTP traffic” refers to requests that are made during page load time, and “reactive processing” refers to the time between the browser sending a reactive input value and the server returning updated reactive outputs.

| Response type | Original | Async | Delta |
|---|---|---|---|
| HTTP traffic | 605 ms | 139 ms | -77% |
| Reactive processing | 10.7 sec | 3.48 sec | -67% |