ISO/IEC JTC1 SC22 WG21 N3391 = 12-0081 - 2012-09-23
Lawrence Crowl, email@example.com, Lawrence@Crowl.org
Monday 7 May 2012
Herb Sutter: Library Composability
Mads Torgersen: Asynchronous Operations (C# Library Level)
Mads Torgersen: Asynchronous Operations (C# Language Level)
Niklas Gustafsson: Asynchronous Operations (C++ Language)
Asynchronous Operations Wrapup
Jeffrey Yasskin: Executors
Division of Work between SG1 and SG4
Tuesday 8 May 2012
Justin Gottschlich: Transactional Memory
Robert Geva: The Serial Equivalence of Cilk Plus
Arch Robison: Serial Equivalence's Impact on Space Bounds
Artur Laksberg: PPL
Arch Robison: Concurrent Objects
Pablo Halpern: Thread-Local Storage
Wednesday 9 May 2012
Herb Sutter: Serial Equivalence and the Space of Solutions
Herb Sutter: Async and Thread-Local Storage
Robert Geva: Vector Parallelism
Herb Sutter: AMP
Alasdair Mackintosh: C++ Latches and Barriers
Lawrence Crowl: Proposal for counters
Lawrence Crowl: N3353-3 (Concurrent queues) progress
Lawrence Crowl: Stream mutexes
Artur Laksberg: Cancellation
Herb Sutter: Blocking in future destructor
Meeting opened at: 9:20 AM
Breaks at 10:30 and 2:30. Lunch at 12:00-1:15. Adjourn at 5:00. Subsequent days start at 8:30.
WiFi SSID, username, and password are on the whiteboard. The password will change daily.
No objections to agenda.
We need more libraries. To make that possible, they must be developed independently. Yet completely independent libraries are not composable, and composability is essential. A monolithic approach achieves composability, but does not scale.
Suggest defining a set of foundation types that all libraries use. Hope in particular to have asynchronous operations in that set of foundation types.
Supporting materials are under the SG1 documents.
Threads are expensive. Threads may be limited. Need a way to handle more activities than there are threads.
Where do continuations run? You can specify it. The default is to run on the thread pool. The thread pool has one thread per CPU, and will do thread injection. Can write your own synchronization context object.
Support composition of tasks. E.g. continue_with makes a task from a task and additional work. E.g. when_all makes a task returning an array of values from an array of tasks each returning a value. E.g. when_any makes a task returning a task from an array of tasks each returning a value.
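The composition operators can be sketched in C++ with std::future. The name when_all_sketch below is invented for illustration and is not the interface of any proposal; a real implementation would chain continuations rather than block a helper thread.

```cpp
#include <future>
#include <memory>
#include <vector>

// Simplified sketch of when_all: collapse a vector of futures into one
// future yielding all the values.
template <typename T>
std::future<std::vector<T>> when_all_sketch(std::vector<std::future<T>> fs) {
    auto shared = std::make_shared<std::vector<std::future<T>>>(std::move(fs));
    return std::async(std::launch::async, [shared] {
        std::vector<T> results;
        for (auto& f : *shared) results.push_back(f.get());
        return results;
    });
}

// Demo: launch n tasks computing i*i, gather them, and sum the results.
inline int sum_of_squares(int n) {
    std::vector<std::future<int>> fs;
    for (int i = 0; i < n; ++i)
        fs.push_back(std::async(std::launch::async, [i] { return i * i; }));
    auto all = when_all_sketch(std::move(fs)).get();
    int sum = 0;
    for (int v : all) sum += v;
    return sum;
}
```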
Can use tasks as a foundation/currency in the libraries.
Significant opportunity to keep representation size low, which enables support for large numbers of tasks.
Tasks versus threads is in part driven by implementation costs.
Tasking sometimes requires transforming sequential loops into recursion or callback chains over tasks. Can declare a method async. Can await a task inside an async method. The await appears synchronous, but is in fact adding a callback to the original task. The compiler does the transformation.
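The loop-to-callback transformation can be shown by hand in C++. All names below are invented for illustration; the point is only that a loop awaiting each step becomes a recursive chain of callbacks, which is the kind of rewrite the compiler performs.

```cpp
#include <functional>

// Stand-in for an asynchronous operation; here it completes immediately.
inline int step(int i) { return i + 1; }

// Hand-transformed continuation-passing version of:
//   for (i = 0; i < n; ++i) total += step(i);  done(total);
inline void loop_async(int i, int n, int total,
                       std::function<void(int)> done) {
    if (i == n) { done(total); return; }
    int result = step(i);  // in real code, this would resume in a callback
    loop_async(i + 1, n, total + result, std::move(done));
}

inline int run_loop(int n) {
    int out = 0;
    loop_async(0, n, 0, [&out](int total) { out = total; });
    return out;
}
```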
Return type of an async method must be either void or some task. Awaits have a wider variety of acceptable patterns.
Not the same as delimited continuations.
The await makes sure you get back in the "same place", which is the synchronization context, e.g. the UI thread.
Await has functionality of yield.
Tasks may not stay on the same thread, so await potentially interacts badly with thread-local state, such as held locks.
Compiler optimizations: keep variables on the stack until they must be moved to the heap; check whether the task is already done and, if so, execute the code normally.
The previous Microsoft primitive, iasync, was hard to use. Newer task/async primitives can be migrated into C++.
The C++ [shared_]future has some weaknesses with respect to task/await. Propose mechanisms to address those weaknesses.
Has task_coordination_context::use_current() to ensure tasks execute on the appropriate thread.
Has a mechanism for voluntary cancellation using tokens. Threads/tasks periodically poll the token to see if they have been canceled.
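The token mechanism can be sketched as a shared monotonic flag. The names cancellation_source and cancellation_token below are illustrative; the actual PPL types differ in detail.

```cpp
#include <atomic>
#include <memory>

class cancellation_source;

// A token is a read-only view of a shared flag that tasks poll.
class cancellation_token {
    std::shared_ptr<std::atomic<bool>> flag_;
    friend class cancellation_source;
    explicit cancellation_token(std::shared_ptr<std::atomic<bool>> f)
        : flag_(std::move(f)) {}
public:
    bool is_cancelled() const { return flag_->load(); }
};

// The source hands out tokens and can flip the flag exactly once.
class cancellation_source {
    std::shared_ptr<std::atomic<bool>> flag_ =
        std::make_shared<std::atomic<bool>>(false);
public:
    cancellation_token token() const { return cancellation_token(flag_); }
    void cancel() { flag_->store(true); }
};

// A cooperative task: polls the token and stops early when cancelled.
inline int count_until_cancelled(cancellation_token t, int limit) {
    int i = 0;
    while (i < limit && !t.is_cancelled()) ++i;
    return i;
}

inline bool demo_cancel() {
    cancellation_source src;
    cancellation_token t = src.token();
    src.cancel();
    return count_until_cancelled(t, 100) == 0;
}
```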
One needs more control over scheduling, so we need an interface to provide that control.
Resumable functions help composition with traditional control structures. Such functions are identified at the declaration for warnings, and at the definition for compilation effect. The compiler transforms the function definition into a state machine, with transitions at resumption and return points.
Much discussion of reference parameters and capturing.
Examples hand-compiling function using proposed mechanisms. Can short-circuit promise to initialize future directly.
Discussion of attributes of proposal. Much of the discussion was around the right strategy for integrating something along these lines into the standard. Is asynchrony better done with changes to I/O operations?
Futures may limit performance for large-scale parallelism. See "Space-efficient scheduling of multithreaded computations".
Straw polls to 'ask author to do more work'.
Presentation of N3378 A preliminary proposal for work executors.
We have the following division of work.
Addressing concerns expressed at Kona meeting. Locks are impractical for generic programming, because ordering of locks is generally not visible. Transactional memory solves the problem. It also helps for fine-grained locking on irregular data structures.
Discussion on composability. Relaxed transaction can expose partial state. Discussion on interaction with existing locks.
Helps with read-mostly structures.
Presentation of student-based study of error rates with transactions versus mutexes and condition variables. Error rates fall from 50% to 10%.
Presentation of longer-term study of graduate students on an indexing application. Overall 14% improvement in overall programming effort with transactional memory. Good performance using transactional memory.
Is transactional memory fast enough? Many different software transactional memory systems with different performance characteristics, so probably one fits your needs. Recent work on adaptive algorithms. Can change system without changing client code. Unknown about ABI changes.
Performance study of a multiplayer game with >100k concurrent players. Requirement is atomicity and consistency for all actions.
Much discussion on appropriateness to standards, industry adoption, compatibility with existing code, meaning of a Technical Specification, possibility of a Study Group, etc.
No objection to creating a Study Group for transactional memory. We will create one. Chair to be selected.
Three keywords: cilk_spawn, cilk_sync, cilk_for. The serial elision of a Cilk program is well defined. A Cilk program without a determinacy race behaves the same as its serial elision when running on any number of threads.
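The serial elision is obtained by deleting cilk_spawn and cilk_sync (and turning cilk_for into for). The code below is the serial elision of the classic Cilk fib example, with the deleted keywords left in comments to show where they would appear.

```cpp
#include <cstdint>

// Serial elision of the parallel fib. In the Cilk version, fib(n-1) is
// spawned and cilk_sync joins before the addition; without a determinacy
// race the two versions behave identically on any number of threads.
std::int64_t fib(int n) {
    if (n < 2) return n;
    std::int64_t x = /* cilk_spawn */ fib(n - 1);
    std::int64_t y = fib(n - 2);
    /* cilk_sync; */
    return x + y;
}
```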
Needs definition of a determinacy race. Is it a data race?
For most library solutions, an equivalent property is not well defined.
Example with race. Correct by using a Cilk-provided abstraction, e.g. cilk::reducer_list_append.
Language support for serial equivalence.
After Cilk became part of Intel, the implementation changed so that the ABI is the same between spawnable and non-spawnable functions.
Example of quicksort. Regular approaches require more space and more threads. Do depth-first serial execution and steal breadth first.
Discussion of equivalence to async/future.
Discussion of overhead of Cilk primitives on algorithms designed to be parallel.
Discussion on modifying or extending the standard library with Cilk.
Performance difference in that paper is unknown, but if inherent likely due to serial equivalence property.
Much discussion on performance tradeoffs, particularly with respect to additional guarantees.
Okay to call locks as long as they commute within the domain.
Discussion on alternate implementations and the burden of implementation.
Two basic tasks of compiler: injecting code around constructs and optimizing reducer based on source.
There is a macro/library equivalent of Cilk that is useful for evaluating and understanding Cilk. But it is not the library you are looking for. Move along.
Heavily uses lambda to represent work passed to PPL. Has task_group, structured_task_group, parallel_invoke, etc. Can '.run' a lambda in a group. Can '.wait' for all runs to finish in a group. Can '.run_and_wait' a lambda in a group.
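The task_group idiom can be approximated portably. The sketch below uses std::async and is not the PPL task_group, which schedules onto the Concurrency Runtime rather than spawning a thread per run.

```cpp
#include <atomic>
#include <future>
#include <vector>

// Minimal task_group-like sketch: .run() launches work, .wait() joins it.
class task_group_sketch {
    std::vector<std::future<void>> pending_;
public:
    template <typename F>
    void run(F&& f) {
        pending_.push_back(std::async(std::launch::async, std::forward<F>(f)));
    }
    void wait() {
        for (auto& f : pending_) f.get();
        pending_.clear();
    }
};

inline int demo_task_group() {
    std::atomic<int> sum{0};
    task_group_sketch g;
    g.run([&] { sum += 1; });
    g.run([&] { sum += 2; });
    g.wait();  // both lambdas have finished here
    return sum.load();
}
```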
Current standard library data structures are not concurrency safe.
extern concurrent_vector&lt;T&gt; v; copy(u.begin(), u.end(), v.grow_by(u.size()));
combinable objects provide reduction.
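The combinable reduction idea: each thread accumulates into its own slot, and a final combine step folds the slots together. The sketch below is illustrative only; PPL's combinable uses efficient thread-local storage rather than a mutex-protected map keyed by thread id.

```cpp
#include <map>
#include <mutex>
#include <thread>

template <typename T>
class combinable_sketch {
    std::mutex m_;
    std::map<std::thread::id, T> slots_;
public:
    // Each thread gets its own value-initialized slot; no cross-thread
    // contention on the accumulations themselves.
    T& local() {
        std::lock_guard<std::mutex> lk(m_);
        return slots_[std::this_thread::get_id()];
    }
    // Fold all per-thread slots with a binary operation.
    template <typename BinOp>
    T combine(BinOp op, T init) {
        std::lock_guard<std::mutex> lk(m_);
        for (auto& kv : slots_) init = op(init, kv.second);
        return init;
    }
};

inline int demo_combinable() {
    combinable_sketch<int> c;
    std::thread t1([&] { c.local() += 10; });
    std::thread t2([&] { c.local() += 32; });
    t1.join(); t2.join();
    return c.combine([](int a, int b) { return a + b; }, 0);
}
```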
Much discussion on the specificity of data structures to patterns.
Three use cases for thread-local storage.
Fourth use case suggested: merge multiple single-threaded processes into a single multi-threaded process.
What should thread_local specify?
There are at least three candidates. Should be associated with std::thread.
Much discussion on "what is a thread", "what is a thread-local variable", "what about POD thread-local variables", etc.
Did I mention there was much discussion?
Less information requested means more optimizable. Less expressiveness or fewer execution guarantees means more optimizable.
Serial equivalence is in "more information" and "fewer execution guarantees". Parent stealing is in "fewer execution guarantees". Suggest less overlap in the bubbles is better because there are fewer cases of two ways to do the same thing. The space is actually more than two dimensions. Can we extend existing proposals to cover more of the space?
Benchmarking fib(30) example. The std::async with default launch policy is useful for parallel decomposition. The std::async with launch::async policy is useful for "get this work off my GUI thread".
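The fib benchmark pattern looks roughly like the following sketch: spawn one branch with std::async, compute the other inline, and fall back to serial recursion below a cutoff. The cutoff is an assumption added here to keep thread counts bounded; the discussed benchmark may differ.

```cpp
#include <future>

// Plain serial fib, used below the parallelism cutoff.
long sfib(int n) { return n < 2 ? n : sfib(n - 1) + sfib(n - 2); }

// Parallel decomposition: with the default launch policy the
// implementation may defer the branch; with launch::async it must run
// asynchronously (useful for "get this work off my GUI thread").
long pfib(int n, int cutoff) {
    if (n < cutoff) return sfib(n);
    auto x = std::async(std::launch::async, pfib, n - 1, cutoff);
    long y = pfib(n - 2, cutoff);
    return x.get() + y;
}
```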
Microsoft implemented async(launch::async) with thread pools. Reinitializes thread-locals. Does not handle destruction.
Extensive discussion of semantics and implementation of thread-local variables. Did I mention extensive?
Core counts on servers will continue to grow. Core counts on (mobile) clients are not likely to get much higher. Vectors are getting wider.
Ability to program tasks versus vectors explicitly. Ability to express intent for parallel execution and let compiler map to hardware resources.
Several currently available technologies for vectorization.
Discussion of appropriate syntax and semantics for vectorization.
Need language support for efficient code.
Data parallelism is a programming pattern; vectorization is an implementation.
Has parallel_for_each over a grid (set of indices).
Lawrence presented the paper, talking about latches.
Jeffrey: Would be useful to be able to wait on several latches.
Jeremiah: Do we want a wait that returns a future?
Lawrence: Point of all these things is to be very low overhead.
Jeffrey: Still nice to express semantics.
Lawrence continued to present, showing the barrier portion of the paper.
Bartosz: All these function pointers return and take void. Do you need side effects?
Lawrence: No, can pass in a lambda.
Lawrence asked whether people are interested, and what they think of reset in the barrier.
Jeremiah: Is there a try_wait?
Lawrence: No try_wait.
Jeremiah: Would be nice to have a non-blocking version.
Olivier: Do barrier in HW for threads that are concurrent on the chip. Barriers very useful, should have one. When it starts being specified, one would want to ensure that it can be HW-accelerated when the conditions are suitable.
Olivier: Related to the vector work earlier, but a facet we didn't talk about - when you launch a number of data-parallel work items, implementation must provide a barrier. If group is small enough, can just be a "bar" instruction on their machine. Should take this into account in a specification.
Hans: Perhaps at odds with what Olivier just said, there is a java.concurrent "phaser" interface that seems very similar to this with a lot of experience.
Lawrence: Wouldn't be terribly surprised if we ended up deciding we needed two.
Stephen: Ada is implementing simple barriers at this time. Looked at simple and complex barriers, decided complex were too much work in this standard but simple very close to what's proposed here.
Bartosz: When Lawrence says "performance of latch is better" is that an intrinsic thing or processor-specific?
Lawrence: Latches are much simpler, and can implement barriers on latches, so they are inherently simpler.
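A minimal latch along the lines being discussed can be sketched as a monotonic countdown over a mutex and condition variable (which is also why such an object would not be movable). The class name and exact interface here are illustrative, not the proposal's wording.

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>

class latch_sketch {
    std::mutex m_;
    std::condition_variable cv_;
    int count_;
public:
    explicit latch_sketch(int n) : count_(n) {}
    void count_down() {
        std::lock_guard<std::mutex> lk(m_);
        if (--count_ == 0) cv_.notify_all();
    }
    void wait() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return count_ == 0; });
    }
    // Convenience combination, matching the barrier-style usage.
    void count_down_and_wait() { count_down(); wait(); }
};

inline int demo_latch() {
    latch_sketch l(2);
    std::atomic<int> ready{0};
    std::thread a([&] { ready += 1; l.count_down(); });
    std::thread b([&] { ready += 2; l.count_down(); });
    l.wait();  // both count_down calls (and increments) happened before this returns
    int seen = ready.load();
    a.join(); b.join();
    return seen;
}
```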
Chandler: Finds it unfortunate this paper would have us end up with different interfaces for waiting until an event occurs. Worked hard to make future a useful specialization. Seems like there's a lot of overlap with latch there. Understanding that there may be some efficiency concerns, would be good to see if we can reuse that interface. Unfortunate to have two.
Lawrence: Would probably be a shared_future.
Chandler: Little awkward that these aren't copyable and no sharing ability; do you expect these to be wrapped up in a shared_ptr?
Lawrence: Expectation would be that there's a local object that's doing synchronization, so not really shared.
Chandler: Also not movable.
Lawrence: All of these have waits in them, wait based on condition variable, condition variables not movable. So not being able to be movable creeps in here this way.
Chandler: Don't understand the need for latch::count_down_and_wait() ?
Lawrence: Convenience feature.
Jeffrey: Also matches barrier.
Chandler: Seems superfluous in latch interface.
Jeremiah: Only makes sense if you have a try wait.
Clark: A few minutes ago Lawrence said "we may need two". What did he mean?
Lawrence: Java-style general "phasers" and low-level "hardware" barriers. The more phaser-style functionality we add, the more these will diverge and make it likely we need two.
Pablo: Hasn't seen a latch before. Reminds him of a semaphore. Is it more efficient than a semaphore? We don't have a semaphore in the standard right now.
Pablo: Seems like we should add semaphores in the standard since they are so useful. If we did have semaphores, would we still need a latch?
Lawrence: Because these are monotonic, there is an exploitable property that might be lost with a semaphore.
Lawrence: Semaphore is a general-purpose mechanism people use for different things. Doesn't communicate the specific use like a latch. But, that said, we can't standardize everything, so maybe we can get more general applicability with a semaphore and just rely on people documenting.
Pablo: For something this small, doesn't seem like a big deal to have it in, and does seem useful to express semantic intent.
Hans: Straw poll?
Lawrence: Seemed to see general agreement that the work should proceed. Still haven't heard about reset callback function though.
Pablo: Doesn't like that lambdas are the only things that will be useful there since it will be void.
Pablo: Would like to understand the intent better to see why lambdas may be sufficient.
N3355 C++ Distributed Counters
Lawrence explained what he has been doing with the counter proposal. No objections to continuing on current path.
N3353 C++ Concurrent Queues
Lawrence: Was asked to look at iterator adaptors in standard instead of queue having its own iterators.
Lawrence: Can use the iterator adaptors for one side of the queue. But none of the existing adaptors would work the other side of the queue.
Lawrence: Prefer to create another iterator adaptor that works for the other side, or stick with the current iterators?
Detlef: Should ask the LWG in Portland.
N3354 C++ Stream Mutexes
Lawrence explains stream mutexes.
Lawrence: Basic comment was, since there are filestream locks available in POSIX, let's use those.
Lawrence: Was able to make that work for stdin and stdout since there's a filestream upon construction, but not easy in other cases.
Lawrence: If you open a stream with a file, there's no way to get a FILE* to use the flock() on. With new standard, can get the fd but not the FILE*.
Lawrence: Even with stdin and stdout using file stream lock implies that you have sync_with_stdio, which is a separate property.
Lawrence: So, best thing we can likely do here is modify the streams to expose their own locking interface. Big downside is that this would probably require an ABI change. Not necessarily for streams associated with FILE* but for others.
Lawrence: So, don't see any general way to get those mutexes attached through to the files since you may not have a FILE*. Without that mechanism, at minimum doing a test to see which kind of thing you're using. Not sure it's worth it.
Hans: So, in one scenario, would export lock/unlock from a stream.
Lawrence: Yes, streams would export mutexes.
Michael: Finds the locking of streams to be very fragmented in the industry. Not great for a thread-safe environment for streams. Had major customer defects come in due to differences in vendor implementations.
Pablo: If it exported lock and unlock, would that be exported through the streambuf?
Lawrence: Yes, probably.
Pablo: In order to make that work, you'd have to change the ABI due to virtual interface?
Lawrence: Understands that in all existing ABIs, adding a new virtual function works as long as it's not an overload?
Pablo: Could see that. So where does the ABI breakage happen?
Lawrence: If can't use the fstream lock stuff, need to allocate a mutex inside the streambuf and that changes the size of a streambuf.
Pablo: (jokingly) could steal space from the buffer itself, right?
Richard: If those functions weren't there on the flip side, would just crash, right?
Lawrence: Doesn't believe he's the right person to push this through.
Pablo: Probably needs to go to LWG.
Lawrence: Found the class he has to be very useful. Available open source.
Lawrence: Not comfortable actually making changes in iostreams (vs a separate class). Would expect that if that happened, direction of that class would change.
Hans: Also an issue for passing streams around.
Lawrence: Right, would pass two objects instead of one. Not great. Clearly his proposal is a workaround for the lack of locking on iostreams.
Hans: Kind of an embarrassment. Need to fix.
Michael: Agreed. Never sure if the "Hello World" concurrency example is not going to be "World Hello". (smile)
Pablo: There is a different solution to this that has been casually mentioned on the reflector, which is to provide an atomic printf-style mechanism (e.g. boost::format).
Pete: There is an alternative to just hold a guard essentially while doing the IO.
Lawrence: Right, that's what his class does. Problem is that it's a separate object, so have to pass mutex wrapper along with the stream.
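The workaround class described here can be sketched as a separate wrapper pairing a mutex with a stream, since iostreams expose no locking of their own. The name locked_stream and its interface are invented for illustration, not those of N3354 or Lawrence's actual class.

```cpp
#include <mutex>
#include <ostream>
#include <sstream>
#include <string>

// Separate-object workaround: the mutex travels with the stream wrapper,
// so callers must pass the wrapper (not the raw stream) around.
class locked_stream {
    std::mutex m_;
    std::ostream& os_;
public:
    explicit locked_stream(std::ostream& os) : os_(os) {}
    // Hold the lock for one logical output statement, so interleaved
    // writers cannot produce "World Hello".
    template <typename F>
    void with_lock(F&& f) {
        std::lock_guard<std::mutex> lk(m_);
        f(os_);
    }
};

inline std::string demo_locked_stream() {
    std::ostringstream buf;
    locked_stream ls(buf);
    ls.with_lock([](std::ostream& os) { os << "Hello " << "World"; });
    return buf.str();
}
```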
Hans: Could we use a global hash table, hashing on the stream.
Lawrence: Could do something like that, but don't know whether there's a file stream associated with these.
Some questions about details of necessary ABI changes.
Pablo: Could add a derived class, use dynamic cast?
Lawrence: There is no type for that, no guarantee of an associated file stream.
Lawrence: Will pass this on, but doesn't think he wants to take it up himself.
Artur described the overall idea, showing some code involving task cancellation.
Cancellation tokens come from a cancellation token source. Every task that receives a cancellation token will be notified of cancellations, and can therefore react.
Stefanus: Does cancel_current_task() cause the task to stop executing?
Artur: Went back and forth on whether that throws an exception. Current state is that it doesn't - need to leave the flow of control yourself.
Hans: So what does cancel_current_task actually do?
Artur: State transition to indicate task is cancelled. Will not start continuations that were queued up.
Artur: t.wait() will return, t.get() will throw an exception since the task was cancelled.
Alex: Seems like this can be generalized, e.g. asking for pause and restart.
Artur: Interesting, wouldn't be part of the cancellation proposal.
Artur: Cancellation is more specific than those, but also more broadly applicable.
Alex: If we know we need cancellation now, since there may be other things, should we move this into something more general?
Ville: Could just add other kinds of tokens.
Jeffrey: In previous implementation, with a shared future, when the last future went out of scope, would automatically cancel the outstanding tasks.
Jeffrey: When you have a tree of tasks, may not want to cancel the root task until all children have cancelled.
Artur: Why would you cancel the tasks if the future goes out of scope?
Jeffrey: Assumes that the only purpose is to provide a result.
Artur: But what if you have side effects?
Lawrence: Could still notify the task that result value isn't needed anymore.
Artur: In their primitives, can cancel all the other tasks, very handy vs iterating through all the tasks.
Pablo: Cancellation token is an argument to the task constructor. Why is it both captured and passed as an argument?
Artur: Scheduler needs to be aware of the cancellation. [some detail not captured here]
Artur: Also provide helper functions like is_task_cancellation_requested() that don't require you to capture cancellation token.
Pablo: Is there a mechanism for aggregating cancellation token to check more than one cancellation token?
Niklas: Had in his proposal, but took it out.
Niklas: Distilling down what's actually proposed. If we ignore sources, there are the cancellation tokens, which are just a canonical bool saying that things have been cancelled. Making this a canonical type allows library functions to participate. Separate from things like cancel_current_task(), which takes advantage of this canonical type.
Alex: In poco, have a task manager to do these kinds of things.
Michael: Does this kill the thread?
Artur: No, just a boolean essentially.
Michael: Have a similar mechanism for OMP cancellation. Their tasks are structured, so every task has to spin and check for the cancellation token. No button to press, but tasks must spin to check.
Artur: Have a similar mechanism in their implementation as well.
Artur: In VS2010, were almost completely happy with cancellation mechanics. Seems to work well in structured mechanisms, e.g. in a task group. In this case, have no such structure.
Michael: In OpenMP, considered all kinds of "killing".
Niklas: This isn't killing, just shunning.
Jeffrey: Do you always have to check, or can you be notified?
Artur: There is a way to subscribe to cancellation requests and be notified through a callback.
Jeffrey: Interacting with OS interface may make it important to be able to cancel a blocking call.
Lawrence: Is that callback essentially a signal? Async call into an existing thread?
Artur: It is synchronous ...
Jeffrey: Doesn't run in the thread being cancelled.
Niklas: Proposal for cancellation tokens themselves does not prescribe any particular behaviour on tasks.
Jeffrey: If we had the ability for latch to do wait_for_any or wait_for_all, could provide this kind of notification. So cancellation tokens would just be a latch of count one.
Chandler: Can't quite understand why we want these to be so separated from future. If you write an interface that accepts promise and fulfills it, would be nice to have just one interface to detect cancellation. Would be nice if that were on the promise itself. Wonder if need a separate token, rather than just a latch and a lambda.
Herb: General thought - already got into some trouble by trying to make future be a handle for the task. Future is really an asynchronous value. Ticket redeemable for an object of type T. Ideally should be no less or more than that. Seemed to have gotten into trouble when trying to do more with it than that. The fact that cancellation is at different granularities and somewhat independent - e.g. cancelling groups of tasks - seems to also argue for making it independent.
Chandler: Good argument to separate futures from tasks, but not cancellations from tasks.
Niklas: Kona proposal was essentially what Chandler suggested. Idea since then, based on prodding from Herb, is that these tokens are already being used for things other than tasks. So it makes sense to have them as a separate thing.
Chandler: Generalized cancellation out of future, but still cancellation - what if we want to signal something like continuation, like Alex described. Maybe we just need some generic mechanism for signaling - which gets us closer to latches.
Niklas: It's a common pattern. Even just cancellation is very common and very simple.
Artur: Even if we have a more general concept, cancellation needs to be part of that.
Chandler: Would like the idea of being able to inject a cancellation token / signal into promise during creation.
Niklas: Like in Arthur's example, where it's passed into task constructor.
Jeffrey: Cancellation itself is a common enough signal that it probably makes sense to have its own name, even if it's just a wrapper around the general mechanism. Clarifies the intent.
Pablo: Recent grad at MIT did some work in cancellation in the Cilk context, but probably not Cilk dependent. Starts with something similar to this, but, one thing to keep in mind is that sometimes the list of things you want to cancel might be a tree of tasks where one descendant finds a need to cancel, and things need to be propagated across the tree. Found an issue with just a single flag being used - Pablo will take a look and find more info.
Bartosz: Seems we are rediscovering what are already known as channels in ML, a mechanism to communicate between separate threads. Cancellation could go through channels; a future could be coming through a channel as well. First-class composable objects that generalize all this stuff.
Niklas: One simple thing about this is that it's monotonic. Goes from false to true but never the other way around. Like a latch that can't be reset, and that's useful for simplicity. That simplicity is why people use boolean flags.
Bartosz: Like mvar in Haskell
Jeffrey: Those can be set multiple times.
Chandler: Important to start with simple one.
Hans: Don't want to have to wait like you do in a channel, so don't want all the machinery there.
Jeffrey: Thinks this could be implemented with two words, like an int and a pointer that is usually null, and make this implementation very simple.
Artur: Their implementation is also very simple/unsophisticated.
Herb: Would like to get a sense in the room for proposals we aren't talking about, to make sure that people really don't want them. In particular thread kill and thread interrupt - would like to know if everyone agrees that these are evil and shouldn't be considered. One reason to mention this is that boost has implemented thread interrupt; want to know if people agree.
Ville: Has no problem with cooperative interruption - is the same as cooperative cancellation. But forceful interruption is different.
Niklas: If operations are asynchronous, underlying motivation for thread kill and thread interrupt kind of go away.
Hans: Java thread interrupt much more complicated, don't want to throw that in the same bucket.
Herb: Spent a lot of time in 0x talking about pthread-style thread cancellation, would like to know if we can move on.
Chandler: Nasty bugs with thread killing.
Herb: Thanks - got a clear answer.
Michael: Only narrowing to cooperative strategies, ignoring extreme strategies, right?
Herb: Right, better phrasing of his question.
Bartosz: What about program termination?
Jeffrey: Unrelated to this discussion; specification already says.
No objection to seeing a formal proposal for this.
Herb presented the issue overall: for a future created from some work that never has get() called, does it block?
Chandler: Can each side explain their arguments?
Herb: Blocking is evil; defer to Artur.
Artur: E.g. UI thread can end up getting blocked.
Hans: Old paper that talks about this with threads and implicit blocking there. Classic example: after spawning a thread, but before calling join(), throw an exception. Could lead, for example, to access a stack variable that's gone out of scope.
Herb: Doesn't happen so much with PPL/TBB.
Stefanus: Feels early to decide on this. If people end up using
Artur: Already have this problem, e.g. creating an std::function and then returning it, when it references a local variable, with intention of executing it later.
Some discussion of this case and whether it's really the same.
Niklas: This is just one example of a case where you can do this kind of thing. If we want to fix this problem, there's a bigger problem to be fixed.
Niklas: If the destructors block, then futures as they're defined, are useful for decompositional work like this, but will not be suitable to represent async operations. Because, trying to get away from implied wait.
Hans: Would it be better if we were consistent with thread and call terminate?
Niklas: No. You want to do this intentionally. If you have a task that you are only doing for side effects, where you're throwing away the results, e.g. in GUI updates.
Hans: OK, because in that case you have some other way to ensure it completes before program termination.
Jeffrey: When futures were getting designed, was firmly in the camp of blocking and making sure that references don't escape. Since then, changed his mind and recognized the async .then() type cases more. Seems more useful to chase that. Don't mind the status quo, since he'll be telling people not to use async anyways, and control lifetimes with thread pools.
Herb: In general, these are just two ways to let things escape. Guidance in general he's giving people is if you're passing a lambda to just do something with locally (e.g. pass to an algorithm), then capture by reference is the right default. Otherwise, if you're doing something global with it (e.g. assigning to a global, passing to a task), pass by value should be default.
Chandler: Couple of problems with this. We know escaping local variables is dangerous, this (the case where we create a lambda locally) isn't a new problem. But in case of calling some function to return a future, don't really know what that function is going to do. If you pass down a local variable in there, know that that's going to be an issue.
Pablo: Concerned that things like exceptions and return could end up doing something that we don't see in other code. If a future gets created, and the work it's coming from takes a reference to a local variable, it all looks nice until someone throws an exception, and what was a structured function becomes unstructured. [...]
Had to cut off discussion at this point due to time constraints.
Hans: Does anyone want a meeting before Portland?
No real interest.
Clark: Do we want to have a mailing with N-numbered documents from Wiki?
Stefanus: Had interest to see documents from twitter.
Herb: N-numbered papers have ISO restrictions, making wiki public may be too broad. Can we have a middle solution to simply have a place to put all the files that people are OK to provide publicly?
Clark: OK, will do that.
Hans: Thanks the host.