[Cplex] Cplex: suggested topics for discussion on the next teleconf.

Hans Vandierendonck h.vandierendonck at qub.ac.uk
Wed Jun 19 14:10:57 CEST 2013

My two cents:

I fully agree with the need for a common parallel runtime across languages. One could go even further and ask for a common runtime across applications, allowing applications to adapt their degree of parallelism to the current system load.

I like the distinction between language and runtime system. It will probably prove crucial.

I can imagine that a language extension for parallelism would be used to build libraries or interfaces that simplify or streamline the expression of parallelism for a particular group of programmers or projects. Such libraries already exist and serve different needs: think of TBB, Qthreads (Sandia), Qt, Boost.Threads, ... At the moment these are typically implemented on top of pthreads. Is the goal of Cplex to provide a language definition that subsumes all of these efforts, or to provide a language definition that can serve as a building block for such libraries and that is a better abstraction than the pthreads library in a number of ways?
The high-level question here is: What is the void that this language extension needs to fill?

Different users seem to have different requirements for parallel language extensions. Several opposing views have been expressed: a light-touch expression of parallelism versus full control over the execution of parallel threads and data placement, and also task orientation (typified by Cilk) versus thread orientation (OpenMP allows both task and thread orientation). I think these distinct requirements are also reflected in the existence of libraries that provide different abstractions of parallelism. Roughly speaking, there are two approaches to dealing with opposing requirements:
1) Try to provide one common parallel language extension that reconciles all of these contradictory requirements. This sounds to me like redoing OpenMP, but ten times harder, because C/C++ have a much wider scope than OpenMP. I think this is what the Geva/Gove proposal attempts to do.
2) Pitch a common parallel language extension at a lower level (preferably more abstracted and better integrated with the language than pthreads) and invite various libraries that extend this language and use it to implement various behaviours and provide various guarantees that meet different user requirements. The added value of the language extension would be that these libraries would be able to co-exist by definition of the language, and by using the common runtime.
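To make option 2) concrete, here is a minimal sketch of what "a lower-level building block plus libraries on top" could look like. The names cplex_spawn/cplex_sync and the pthreads-based implementation are my own invention purely for illustration; the point is that a library-level abstraction such as a parallel_for can be layered on a small spawn/sync primitive without owning its own thread pool.

```c
#include <pthread.h>

typedef void (*task_fn)(void *);

typedef struct {
    pthread_t thread;
    task_fn   fn;
    void     *arg;
} cplex_task;

static void *task_trampoline(void *p)
{
    cplex_task *t = p;
    t->fn(t->arg);
    return NULL;
}

/* Hypothetical low-level primitive: start a task; the common runtime
 * (here stood in for by raw pthreads) decides where it runs. */
static void cplex_spawn(cplex_task *t, task_fn fn, void *arg)
{
    t->fn = fn;
    t->arg = arg;
    pthread_create(&t->thread, NULL, task_trampoline, t);
}

static void cplex_sync(cplex_task *t)
{
    pthread_join(t->thread, NULL);
}

/* Library layer: a parallel_for that a TBB-like library could provide
 * on top of the primitive, splitting [0, n) into ntasks sub-ranges. */
typedef struct {
    int begin, end;
    void (*body)(int);
} range;

static void run_range(void *p)
{
    range *r = p;
    for (int i = r->begin; i < r->end; i++)
        r->body(i);
}

static void parallel_for(int n, void (*body)(int), int ntasks)
{
    cplex_task tasks[ntasks];
    range ranges[ntasks];
    for (int k = 0; k < ntasks; k++) {
        ranges[k].begin = k * n / ntasks;
        ranges[k].end   = (k + 1) * n / ntasks;
        ranges[k].body  = body;
        cplex_spawn(&tasks[k], run_range, &ranges[k]);
    }
    for (int k = 0; k < ntasks; k++)
        cplex_sync(&tasks[k]);
}

/* Demo payload: write twice the index into a global array. */
enum { N = 100 };
int data[N];
static void double_elem(int i) { data[i] = 2 * i; }
```

Two competing libraries written this way would share one scheduler by construction, which is exactly the co-existence guarantee option 2) is after.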

I think 2) needs to be solved before taking on 1). I also fear that if 1) is attempted, the end result would not be accepted by the community at large as the single correct way forward. The result would be too much of a compromise, and so clearly just one way of doing things, that it would not push alternative libraries (TBB, Qt, Boost.Threads, …) out of the market. I may have a limited view of what a standard is, but if there is room for alternatives to the standard, then perhaps the standard is not good enough.

Working out 2) requires solving some hard problems that also need to be solved in the case of 1), but in the case of 1) it is easy to sweep them under the carpet and treat them as part of the implementation of the runtime system. One example is the composition of parallel regions. The challenge here is not so much the functional correctness of composition, which should be automatic given the common language and runtime, but its performance. An important question is how to assign threads to a parallel region in a way that adapts to the dynamic context in which the region is called. This may involve taking threads away from a parallel region, or assigning new threads to it while it executes. In my opinion, a parallel language should define an API to control the sharing of threads between parallel regions. Some applications may choose to ignore it (I am thinking of HPC here, where programmers like full control), but in other cases programmers would prefer to leave such issues to the system. In any case, it is important to define how such mechanisms may operate.

I would also like to draw attention to the determinism of programs, which has not been mentioned on this list before. Determinism means that every parallel execution of a program produces an equivalent functional result. It is probably impossible to guarantee determinism when providing a low-level view of parallel tasks or threads; however, I believe it would be useful to define exactly the conditions under which a program exhibits deterministic behaviour. This could include the set of controls on the runtime system that may be used without sacrificing determinism, the types of reduction variables that may be used, which concurrent thread interactions are allowed, etc.
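One way to illustrate such conditions is a reduction that is deterministic by construction: each worker writes only private state over a fixed partition of the input, and the partial results are combined in a fixed order after all workers have joined, so no outcome depends on thread scheduling. This pthreads sketch is my own example, not part of any proposal.

```c
#include <pthread.h>

enum { NWORKERS = 4, COUNT = 1000 };

static long partial[NWORKERS];

/* Each worker sums a fixed, disjoint sub-sequence of 0..COUNT-1 into
 * its own slot: no shared mutable state, no data races. */
static void *sum_range(void *arg)
{
    int id = (int)(long)arg;
    long s = 0;
    for (int i = id; i < COUNT; i += NWORKERS)
        s += i;
    partial[id] = s;
    return NULL;
}

static long deterministic_sum(void)
{
    pthread_t t[NWORKERS];
    for (long id = 0; id < NWORKERS; id++)
        pthread_create(&t[id], NULL, sum_range, (void *)id);
    for (int id = 0; id < NWORKERS; id++)
        pthread_join(t[id], NULL);

    long total = 0;
    for (int id = 0; id < NWORKERS; id++)  /* fixed combine order */
        total += partial[id];
    return total;
}
```

Had the workers instead accumulated into one shared variable, the combine order (and, for floating-point data, the result) would depend on scheduling; a language definition could pin down exactly which of these two patterns it licenses as "deterministic".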


On 19 Jun 2013, at 09:43, Tom Scogland <tom at scogland.com> wrote:

The way Herb put his arguments brings up an interesting point. During the call there was a great deal of discussion regarding the scope of our mandate, and whether this should be a language-independent or a C-only proposal; perhaps both are true.

Specifically I'm referring to the statement "IMO we cannot live with a long-term requirement that applications use multiple schedulers."

I agree with that statement, and would further argue that it applies across more disparate languages than just C and C++.  It does not say anything about the actual parallel extension or specification however, just the runtime system.  The current merged proposal explores extensions for the expression of parallelism which are completely dependent on a runtime system to run efficiently, or at all in a concurrent context.  Even so, following from its Cilk roots, the proposal does not specify anything about that runtime system beyond that it will not violate the guarantees of the language level constructs.

If the interface of the runtime scheduler is specified such that it can be language independent, with a common design and layout for tasks or ranges of tasks, their corresponding data, dependencies and scheduler controls, that should be sufficient to allow for interoperability.  Note that I said the runtime scheduler, by which I mean concurrency manager and task scheduler, not parallel language extension.
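One shape such a language-independent scheduler interface could take is a table of function pointers that any language binding (C, C++, Ada, ...) calls to submit tasks, with the actual scheduler hidden behind it. The interface and the trivial run-at-wait implementation below are invented for illustration only.

```c
typedef void (*task_fn)(void *);

/* The language-neutral contract: submit work, then wait for it. */
typedef struct scheduler {
    void (*submit)(struct scheduler *, task_fn, void *);
    void (*wait_all)(struct scheduler *);
} scheduler;

/* Trivial reference implementation: queue tasks and run them serially
 * at wait_all().  A real scheduler would dispatch to worker threads;
 * an alternative scheduler could be slotted in behind the same table. */
#define QCAP 64

typedef struct {
    scheduler base;            /* interface must come first */
    task_fn fns[QCAP];
    void   *args[QCAP];
    int     count;
} fifo_scheduler;

static void fifo_submit(scheduler *s, task_fn fn, void *arg)
{
    fifo_scheduler *f = (fifo_scheduler *)s;
    f->fns[f->count]  = fn;
    f->args[f->count] = arg;
    f->count++;
}

static void fifo_wait_all(scheduler *s)
{
    fifo_scheduler *f = (fifo_scheduler *)s;
    for (int i = 0; i < f->count; i++)
        f->fns[i](f->args[i]);
    f->count = 0;
}

static fifo_scheduler fifo = { { fifo_submit, fifo_wait_all } };

/* Demo payload: accumulate submitted increments through the interface. */
static int counter = 0;
static void bump(void *arg) { counter += *(int *)arg; }
```

A C parallel extension, a C++ library, and an Ada binding could each lower their constructs onto the same `scheduler *`, which is the interoperability property argued for above.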

Then a language specific syntax can be developed for C, C++, Ada, or any other language that could submit tasks.  Perhaps they could even be used to implement alternative runtime schedulers as well.  In the end, this gives us a C specific extension for parallelism that could be composed with similar systems in other languages, libraries and whatever else.

Clearly this is just my thought process, but the idea of designing what we need in two components, a scheduler API and a language extension, seems to solve several of the problems that have been nagging at me.  I think it would also provide a nice separation between the parallel specification and tuning/concurrency control as Clark suggested. The default could be to only use the parallel language extension, simply using whatever scheduler is the language/compiler/standard library default, allowing the system to do whatever it wants. But, if the user is so inclined, an alternative scheduler or control points on the default one could be tuned through an additional interface.

-Tom Scogland
"A little knowledge is a dangerous thing.
 So is a lot."
-Albert Einstein
Cplex mailing list
Cplex at open-std.org

Hans Vandierendonck
PhD, Lecturer (a UK lecturer is equivalent to a US assistant professor)
High Performance and Distributed Computing
School of Electronics, Electrical Engineering and Computer Science
Queen's University Belfast

Bernard Crossland Building
18 Malone Road


