A few months ago, Gurobi set out to answer a simple question: Can a large language model (LLM) provide helpful answers to technical questions about building and solving mathematical optimization models with our solver?
Until recently, I’m sure the question would have sounded absurd—you’d probably have as much luck asking your toaster as you would asking a website. However, with the latest generation of LLMs (GPT-4, in particular), we’ve been surprised to find that these systems have reached a point where they can now provide useful answers to some very sophisticated technical questions.
This surprising result inspired us to investigate further. While out-of-the-box ChatGPT could provide reasonable answers fairly often, it typically required questions to be phrased very carefully. But when we provided ChatGPT with some context up front, it produced really solid answers—and did so quite a bit more often than when no context was given.
Seeing this difference in results inspired us to build Gurobot—a bot on top of ChatGPT that bakes in this contextual information.
To test the effectiveness of this bot in a less controlled environment, we let it loose on questions from the Gurobi Community Forum, again with surprising results.
For a substantial fraction of user questions, which are often posed with ambiguous or missing information, the bot produced responses that we thought were good enough to stand alone. We left those answers untouched, and users appeared happy with the responses.
One of the highlights of this experiment came from a user question titled, “Please correct my code.” The question included 67 lines of dense Python code and a request for someone to tell the author what was wrong with it.
Imagine our surprise when we copied the code into Gurobot, hit ‘return,’ and watched as it instantly identified two issues in the code. We forwarded Gurobot’s answers to the user, who replied: “Wow. It works.”
Wow, indeed!
While we have seen impressive results so far, we must include a note of caution: We have also seen situations where Gurobot hallucinates. In one recent example, Gurobot invented an entirely fictitious Gurobi API, claiming that the user could simply add calls to this non-existent API to solve their problem.
One fascinating thing about LLMs is their lack of contrition: When we told ChatGPT that its answer was incorrect, it tried to run the gurobipy code it originally proposed, acknowledged that it couldn’t run it (naturally, since the API didn’t exist), then formulated and tested a new response that was actually correct.
Despite these limitations, we’re finding Gurobot to be useful internally for getting quick answers to questions, and for creating and running short code examples to do simple things.
Getting instant responses to optimization questions can be a huge time saver. And because this tool has proven useful for us, we’ve decided to make it available to everyone.
Once you’ve signed up for a ChatGPT account (free or paid), you can access Gurobot here. Please note that all data you send to it will go to OpenAI (not Gurobi).
You probably noticed that we used the word ‘surprise’ a lot in this post. We suspect you’ll be surprised too by how often Gurobot produces helpful responses. Please give it a try and let us know what you think in our Gurobot community forum.
Chief Scientist and Chairman of the Board
Dr. Rothberg has served in senior leadership positions in optimization software companies for more than twenty years. Prior to his role as Gurobi Chief Scientist and Chairman of the Board, Dr. Rothberg held the Gurobi CEO position from 2015 - 2022 and the COO position from the co-founding of Gurobi in 2008 to 2015. Prior to co-founding Gurobi, he led the ILOG CPLEX team. Dr. Edward Rothberg has a BS in Mathematical and Computational Science from Stanford University, and an MS and PhD in Computer Science, also from Stanford University. Dr. Rothberg has published numerous papers in the fields of linear algebra, parallel computing, and mathematical programming. He is one of the world's leading experts in sparse Cholesky factorization and computational linear, integer, and quadratic programming. He is particularly well known for his work in parallel sparse matrix factorization, and in heuristics for mixed integer programming.