Gurobi’s powerful MIP algorithm allows you to add complexity to your model to better represent the real world, and still solve it within the available time.

  • Public benchmarks consistently show that Gurobi finds proven-optimal solutions faster than competing solvers.
  • The performance gap grows as model size and difficulty increase.
  • Gurobi has a history of making continual improvements across a range of problem types.
  • Gurobi is tuned to optimize performance over a wide range of instances.
  • Gurobi is tested thoroughly for numerical stability and correctness using an internal library of over 10,000 models from industry and academia.

 

The Gurobi model library consists of over 10,000 real-world models sourced from academia and from our industry prospects and customers. We test every change we make to the solver against this library, so we know that each new version of Gurobi delivers meaningful performance improvements to our users.

See how Gurobi performance has grown with each new release.

Gurobi 9.5 Comparison

 

Why Benchmarks?

Benchmarking is an important part of evaluating a solver, and public benchmarks can provide a useful perspective during your evaluation. When looking at any benchmark test, there are a few critical points to consider in order to truly understand the results and select the solver that is best for you.

We firmly believe that our software and our model library are the most robust on the market, and we consistently win almost every major public benchmark test. Unfortunately, when we test competing solvers against our library, their licensing restrictions prevent us from publishing the results.

 

Proven Speed and Accuracy

Benchmark results can fluctuate over time as companies introduce new versions of their solvers. With few exceptions, the Gurobi Optimizer consistently wins in public benchmark test results, showing the:

  • Fastest times among linear programming (LP) solvers
  • Fastest times among mixed-integer programming (MIP) solvers
  • Fastest times among quadratic programming (QP) and quadratically-constrained programming (QCP) solvers
  • Fastest times for detecting infeasibility

Gurobi keeps getting better and better with each version.

Tips for Evaluating Benchmarks & Solvers

Double-check the defaults

Benchmark tests are usually run using a solver’s default settings, so it’s important to understand what those defaults are. Defaults are chosen to provide the best overall performance across a wide range of models, which means they’re often not optimal for any particular model.

Understand benchmark tests in context of their defaults. Use them as a starting point, and ultimately test solvers against your own models.
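When you do run solvers against your own models, a standard way to aggregate the results is the shifted geometric mean of solve times, which public solver benchmarks commonly use because it damps the influence of very easy instances. The sketch below is illustrative only: the solver names and timings are hypothetical placeholders, not measured results.

```python
# Solver-agnostic sketch of aggregating benchmark timings with the
# shifted geometric mean. All solver names and times are hypothetical.
import math

def shifted_geometric_mean(times, shift=10.0):
    """Shifted geometric mean of solve times (seconds).

    A shift of ~10s is common in public solver benchmarks; it keeps
    trivially easy instances from dominating the comparison."""
    logs = [math.log(t + shift) for t in times]
    return math.exp(sum(logs) / len(logs)) - shift

# Hypothetical wall-clock times (seconds) per model, per solver.
results = {
    "solver_a": [1.2, 45.0, 600.0, 3.1],
    "solver_b": [0.9, 80.0, 950.0, 2.5],
}

for solver, times in results.items():
    print(f"{solver}: shifted geomean = {shifted_geometric_mean(times):.1f}s")
```

Comparing shifted geometric means across solvers on *your* model set gives a single, robust headline number while still reflecting performance on the hard instances that matter.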

Dig deeper than face-value

Some benchmark tests can be misleading, intentionally or not. If a company cherry-picks models and tunes its solver for that subset, it may be able to claim superiority over recognized industry-leading solvers. On a deeper look, you may find that the selected models are purely academic in nature and not reflective of real-world problems, or that tuning the opposing solver would yield much better performance than the test parameters indicate.

Make sure the results you’re seeing aren’t being manipulated or misconstrued to appear more impressive than they are.

Look for meaningful measures

It’s important to determine whether a test measures something that is meaningful to you in practice. A test that measures the time required to produce poor-quality solutions isn’t relevant if your application requires high-quality solutions.

Evaluate the benchmark test and the solver’s performance based on the problems and models you need to solve.

Tune the parameters

When testing a solver, you need the opportunity to tune its performance for your specific models. Gurobi includes over 100 adjustable parameters, plus an Automatic Tuning Tool that intelligently explores parameter settings and returns recommended settings to optimize the solver for your model(s).

Using default settings, Gurobi has the fastest out-of-the-box performance. By using the Automatic Tuning Tool to tune the parameters for each individual model, mean performance across the models increases by 68%. Our distributed tuning capabilities show a 152% performance improvement in the same amount of tuning time.
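The core idea behind automated parameter tuning can be sketched in a solver-agnostic way: enumerate candidate parameter settings, time a solve under each, and keep the best. The sketch below uses a hypothetical parameter grid and a stand-in `solve()` stub; Gurobi's actual Automatic Tuning Tool explores the parameter space far more intelligently than this exhaustive search.

```python
# Solver-agnostic sketch of parameter tuning by grid search.
# The parameter names, values, and solve() stub are hypothetical.
import itertools

def solve(model, params):
    # Stand-in for a real solver call: returns a pretend solve time,
    # faster when a particular (made-up) setting happens to fit the model.
    base = {"m1": 2.0, "m2": 5.0}[model]
    penalty = 0.5 if params["heuristics"] == 0.1 else 1.5
    return base * penalty

grid = {
    "heuristics": [0.05, 0.1, 0.5],  # hypothetical candidate values
    "cuts": [0, 1, 2],
}

def tune(models):
    """Try every combination in the grid; return (total_time, params)."""
    best = None
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        total = sum(solve(m, params) for m in models)
        if best is None or total < best[0]:
            best = (total, params)
    return best

total, params = tune(["m1", "m2"])
print(f"best settings {params} with total time {total:.1f}s")
```

Note the combinatorics: even this toy grid has 9 combinations, and with 100+ real parameters an exhaustive sweep is impossible, which is why an intelligent tuning tool (and distributed tuning, which evaluates candidates in parallel) matters in practice.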

 

Guidance for Your Journey

Gurobi: Always Free for Academics

We make it easy for students, faculty, and researchers to work with mathematical optimization.

Trusted Partners, at Your Service

When you face complex optimization challenges, you can trust our Gurobi Alliance partners for expert services.

We’ve Got Your Back

Our global team of helpful, PhD-level experts is here to support you, with responses in hours, not days.

What's New at Gurobi