Performance Considerations on a Wide-Area Network (WAN)
While using Gurobi Compute Server typically requires no modifications to your code, performance considerations may force you to do some tuning when your client and server are connected by a slow network (e.g., the internet). We'll briefly discuss the source of the issue and the changes required to work around it.
In a Gurobi Compute Server, a call to a Gurobi routine can result in a network message between the client and the server. An individual message is not that expensive, but sending hundreds or thousands of messages could be quite time-consuming. Compute Server does a few things to reduce the number of such messages. First, it makes heavy use of caching. If you request an attribute on a single variable, for example, the client library will retrieve and store the value of that attribute for all variables, so subsequent requests won't require additional communication. In addition, our lazy update approach to model building allows us to buffer additions and modifications to the model. You can feel free to build your model one constraint at a time, for example; your changes are communicated to the server in one large message when you request a model update.
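To make the lazy-update idea concrete, here is a toy sketch (not Gurobi's actual client library; the class and method names are invented for illustration) of a client that buffers changes locally and ships them in a single message when an update is requested:

```python
# Toy illustration of lazy updates: N local modifications become
# one network message, instead of one message per modification.
# This is NOT the real Gurobi client; names here are hypothetical.

class LazyModelClient:
    """Buffers model changes locally; one message ships them all."""

    def __init__(self):
        self._pending = []       # buffered additions/modifications
        self.messages_sent = 0   # stand-in for network round trips

    def add_constr(self, constr):
        # No network traffic here: the change is only recorded locally.
        self._pending.append(constr)

    def update(self):
        # All buffered changes go to the server in one large message.
        if self._pending:
            self.messages_sent += 1
            self._pending.clear()

client = LazyModelClient()
for i in range(1000):
    client.add_constr(f"c{i}: x{i} + y{i} <= 1")  # builds locally
client.update()
print(client.messages_sent)  # a single round trip for 1000 constraints
```

With an eager client, the same loop could cost a thousand round trips; over a WAN with, say, 50 ms of latency per message, that difference alone is close to a minute of wall-clock time.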
Having said that, we should add that not all methods are cached or buffered. As a result, we suggest that you avoid doing the following things:
- Retrieving the non-zero values for individual rows and columns of the constraint matrix (using, for example, GRBgetconstrs in C, GRBModel::getRow in C++, GRBModel.getRow in Java, GRBModel.GetRow in .NET, and Model.getRow in Python).
- Retrieving individual string-valued attributes.
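The cost difference between cached and uncached queries can be sketched with another toy model (again, invented names, not the real client library): a cached attribute request fetches values for all variables in one message, while an uncached per-row query pays one message per call.

```python
# Toy illustration of attribute caching vs. uncached per-item queries.
# This is NOT the real Gurobi client; names here are hypothetical.

class CachingClient:
    def __init__(self, server_values):
        self._server = list(server_values)  # pretend remote store
        self._cache = None
        self.messages_sent = 0

    def get_attr(self, index):
        # First request fetches the attribute for ALL variables at once;
        # later requests are answered from the local cache for free.
        if self._cache is None:
            self.messages_sent += 1
            self._cache = list(self._server)
        return self._cache[index]

    def get_row_uncached(self, index):
        # Stand-in for an uncached query (e.g., per-row retrieval):
        # every single call costs a network message.
        self.messages_sent += 1
        return self._server[index]

cached = CachingClient(range(100))
for i in range(100):
    cached.get_attr(i)

uncached = CachingClient(range(100))
for i in range(100):
    uncached.get_row_uncached(i)

print(cached.messages_sent, uncached.messages_sent)  # 1 vs. 100
```

The practical advice follows directly: when you need per-row or string-valued data from a Compute Server model over a slow link, retrieve it in bulk where the API allows, rather than looping over individual items.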
Of course, network overhead depends on both the number of messages that are sent and the sizes of these messages. We automatically perform data compression to reduce the time spent transferring very large messages. However, as you may expect, you will notice some lag when solving very large models over slow networks.
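Gurobi's wire format is not public, but the effect of compressing a large, repetitive message can be illustrated with generic zlib compression (an assumption for illustration only, not a statement about the actual protocol):

```python
import zlib

# Hedged sketch: shows how compression shrinks a large, repetitive
# payload before it crosses the network. The message content is a
# made-up stand-in, not Gurobi's actual wire format.
message = b"minimize x + y subject to x + y <= 1\n" * 10_000
compressed = zlib.compress(message)
print(len(compressed) < len(message))  # True: far fewer bytes on the wire
```

Compression reduces transfer time for large models, but it cannot eliminate it, which is why some lag remains unavoidable on slow networks.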