Submitting a Distributed Algorithm as a Batch
With a Cluster Manager, you can also submit distributed MIP and concurrent MIP solves as a batch using the batch solve command. Distributed tuning is not yet supported as a batch. Here is an example:
> grbcluster batch solve DistributedMIPJobs=2 misc07.mps
info  : Batch f1026bf5-d5cf-44c9-81f8-0f73764f674a created
info  : Uploading misc07.mps...
info  : Batch f1026bf5-d5cf-44c9-81f8-0f73764f674a submitted with job d71f3ceb...
As we can see, the model was uploaded and the batch was submitted. Submitting the batch creates a parent job that acts as a proxy for the client. Because we set DistributedMIPJobs=2, this parent job in turn starts two worker jobs, as can be observed in the job history:
> grbcluster job history --length=3
JOBID     BATCHID   ADDRESS        STATUS     STIME                USER   OPT      API         PARENT
d71f3ceb  f1026bf5  server1:61000  COMPLETED  2019-09-23 14:17:57  jones  OPTIMAL  grbcluster
6212ed73            server1:61000  COMPLETED  2019-09-23 14:17:57  jones  OPTIMAL              d71f3ceb
63cfa00d            server2:61000  COMPLETED  2019-09-23 14:17:57  jones  OPTIMAL              d71f3ceb
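The same distributed solve can also be submitted as a batch programmatically through the Gurobi Python API in batch mode. The sketch below is a minimal example, not part of the grbcluster workflow above: the Cluster Manager address http://localhost:61080, the API key pair, and the local misc07.mps path are placeholders you would replace with values from your own deployment. Setting DistributedMIPJobs on the model before submitting should play the same role as passing DistributedMIPJobs=2 on the command line, so the parent job created for the batch requests two distributed workers.

import time
import gurobipy as gp
from gurobipy import GRB

# Connect to the Cluster Manager in batch mode; the manager URL and the
# API key pair are placeholders for your own deployment.
env = gp.Env(empty=True)
env.setParam("CSManager", "http://localhost:61080")  # assumed manager address
env.setParam("CSAPIAccessID", "<access-id>")         # assumed credentials
env.setParam("CSAPISecret", "<secret-key>")
env.setParam("CSBatchMode", 1)                       # build a batch instead of solving interactively
env.start()

# Read the model, request two distributed workers, and submit the batch.
with gp.read("misc07.mps", env=env) as model:
    model.Params.DistributedMIPJobs = 2
    batch_id = model.optimizeBatch()
    print("Submitted batch", batch_id)

# Poll the batch until the parent job (and its workers) have finished.
with gp.Batch(batch_id, env) as batch:
    while batch.BatchStatus == GRB.BATCH_SUBMITTED:
        time.sleep(2)
        batch.update()   # refresh cached batch attributes from the manager
    if batch.BatchStatus == GRB.BATCH_COMPLETED:
        print(batch.getJSONSolution())

env.dispose()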