[Cplex] Cplex: suggested topics for discussion on the next teleconf.

Torvald Riegel triegel at redhat.com
Wed Jun 19 13:57:52 CEST 2013


On Tue, 2013-06-18 at 15:12 -0700, Tom Scogland wrote:
> At least in my mind, composability requires at least one more level
> though. A user should be able to limit the scope of the concurrency of
> their program, and preferably of libraries they call as well. The
> example that jumps to mind is a NUMA machine with 4 dual core CPUs
> spawning eight threads, allocating memory on each memory node and
> spawning tasks to work on that data. I want a way to tell the system
> to limit tasks to being run on threads on each die.  Will that always
> be right? No, but it needs to be an option.  If it isn't, then where
> are we? We're stuck with OpenCL before the device fission extension,
> where all cores get used regardless of what the user wants, and
> that's not a good place for CPUs.
> 
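
For concreteness: the per-die restriction described above can be
approximated today with explicit affinity.  A minimal sketch, assuming
Linux, glibc's nonportable pthread_setaffinity_np, and an invented
numbering where cores 2*d and 2*d+1 sit on die d (one worker per die
for brevity):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    // One worker per die; each is confined to that die's two cores.
    static void *worker(void *arg) {
        long die = (long)arg;
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(2 * die, &set);      // first core on this die (assumed layout)
        CPU_SET(2 * die + 1, &set);  // second core on this die
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        // ... allocate node-local memory here, then run this die's tasks ...
        return 0;
    }

    int main() {
        pthread_t t[4];
        for (long die = 0; die < 4; ++die)
            pthread_create(&t[die], 0, worker, (void *)die);
        for (int die = 0; die < 4; ++die)
            pthread_join(t[die], 0);
        return 0;
    }
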
I think this kind of input to the schedulers doesn't conflict with how
Cilk manages execution.  I suggest having a look at the discussions
around Executors in C++ SG1, which are in essence about how to let
users control how resources are used.  Those executors (or the idea
behind them) could probably be supplied as optional input to Cilk.
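
Very roughly, and purely as a sketch of the idea rather than of any
actual SG1 wording (all names below are invented), an executor is an
object that decides where submitted work runs, which is exactly the
kind of knob a Cilk-like runtime could accept:

    #include <functional>

    // Invented interface for illustration: the executor owns the
    // decision of where submitted work is executed.
    struct executor {
        virtual ~executor() {}
        virtual void add(std::function<void()> work) = 0;
    };

    // Hypothetical policy: only run work on the cores of one NUMA die.
    // A runtime could take this as optional input instead of exposing
    // its worker threads directly.
    struct die_local_executor : executor {
        explicit die_local_executor(int die) : die(die) {}
        void add(std::function<void()> work) {
            // ... enqueue onto workers pinned to this->die; run inline
            // here only to keep the sketch self-contained ...
            work();
        }
        int die;
    };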

I also see a difference between what you said you wanted and exposing
resource usage directly: you seem to want to give a hint, or perhaps a
requirement, that is strictly about performance and cannot affect the
correctness of the program; in contrast, when OpenMP, for example,
exposes threads, the program can rely on that exposure for correctness.
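
A small example of that difference (the caller must size the array to
omp_get_max_threads(); that contract is exactly the point):

    #include <omp.h>

    // Correctness depends on the exposed threads: there must be exactly
    // one slot per thread ID, so the runtime cannot silently remap them.
    void per_thread_slots(int *slot /* length >= omp_get_max_threads() */) {
        #pragma omp parallel
        slot[omp_get_thread_num()] += 1;
    }

    // A pure performance knob: the result is identical no matter how
    // many workers actually run the iterations, so the hint cannot
    // break the program.
    void scale(double *a, int n) {
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < n; ++i)
            a[i] *= 2.0;
    }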

It would probably also be helpful if you could tell the scheduler not
just that you want to limit resource usage, but *why*.  In your
example, I guess you want to express that your tasks benefit more from
locality than from simply running anywhere.  The reasons why you want
a certain resource usage are likely more helpful to the scheduler when
it negotiates how to use resources with everything else that uses them
(e.g., other applications).
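
Hypothetically, such an interface could carry the reason rather than
the mechanism; everything below is invented for illustration, not an
existing or proposed API:

    // The program states *why* it wants a certain placement; the
    // scheduler may honor, weaken, or ignore the hint, e.g., when
    // negotiating resources with other applications.
    enum placement_reason {
        data_locality,   // tasks mostly touch node-local memory
        cache_sharing,   // tasks benefit from a shared last-level cache
        reduce_power     // caller prefers fewer active cores
    };

    struct scheduler_hint {
        placement_reason reason;
        int strength;    // 0 = pure hint ... 100 = strong preference
    };

    // Hypothetical spawn entry point taking the hint as optional input.
    void spawn_with_hint(void (*task)(void *), void *arg, scheduler_hint hint);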

Torvald