device-assisted experiment scheduling #432

Open
sbourdeauducq opened this issue May 10, 2016 · 17 comments
@sbourdeauducq (Member)

No description provided.

@dhslichter (Contributor)

Description?

@sbourdeauducq (Member, Author) commented May 12, 2016

The basic idea is to preload the first kernel of the next experiment into the core device, and have the core device pre-switch to it without PC intervention.

This can reduce the dead time between experiments to microseconds, and it helps with experiments that continuously use RTIO inputs and need feedback. If only pre-defined pulses are needed between experiments, seamless handover (#425) and deep enough FIFOs are much easier to implement.

@sbourdeauducq (Member, Author)

Seems inter-experiment seamless handover will be enough. Closing for now.

@sbourdeauducq (Member, Author) commented Mar 18, 2017

According to @dhslichter, a feature like this is still wanted, but the desired dead time between experiments is ~10 ms (not microseconds), so we can afford a round trip with the PC and simplify the scheduler design (the current scheduler could be kept as-is).

How about the following: experiments in the prepare stage open connections to the core device, compile a kernel, and load it into core device memory; when that kernel is actually called (in the run stage, and run() itself can be a kernel), all that needs to be done is to send a message to the device to start it.

@whitequark (Contributor)

The problem is, like with caching kernels, the memory model. Can arbitrary code execute between compilation and execution of the kernel?

@sbourdeauducq (Member, Author)

No, we would need to restrict that. Maybe even enforce it so that the user cannot modify a Python object that has been included in a kernel compilation until the kernel is run, but I'm not sure if that can be done without horrible/unreliable hacks and/or slowness.

@whitequark (Contributor) commented Mar 18, 2017

Surprisingly, this is a supported use case. We can override __setattr__, temporarily.
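For illustration, a minimal sketch of such a temporary override (the `frozen` context manager and `FrozenError` are invented names for this sketch, not ARTIQ API). Note that Python looks up special methods on the type, not the instance, so the override has to be installed on the class:

```python
import contextlib

class FrozenError(RuntimeError):
    """Raised when a frozen object is mutated on the host."""

@contextlib.contextmanager
def frozen(cls):
    # Special methods are looked up on the type, so the override
    # must be installed on the class, not on a single instance.
    original = cls.__dict__.get("__setattr__")
    def blocked(self, name, value):
        raise FrozenError("%s is frozen while its kernel is loaded"
                          % cls.__name__)
    cls.__setattr__ = blocked
    try:
        yield
    finally:
        if original is None:
            del cls.__setattr__
        else:
            cls.__setattr__ = original

class Params:
    def __init__(self, freq):
        self.freq = freq

p = Params(100e6)
mutation_blocked = False
with frozen(Params):
    try:
        p.freq = 200e6       # rejected while "compiled into a kernel"
    except FrozenError:
        mutation_blocked = True
p.freq = 200e6               # allowed again once unfrozen
```

The restore step keeps any pre-existing class-level `__setattr__` intact, so the override really is temporary.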

@sbourdeauducq (Member, Author)

And note that, in this case, the currently executing kernel and the next are from separate processes.

@whitequark (Contributor)

In that case there's nothing to be done except making the second kernel execution wait until the session drops instead of interrupting, no?

@sbourdeauducq (Member, Author)

Yes, we can override setattr, but this won't catch all modifications, e.g. object.__setattr__(target, key, value). But it could be good enough.

@whitequark (Contributor)

> Yes, we can override setattr, but this won't catch all modifications, e.g. object.setattr(target, key, value). But it could be good enough.

No, GitHub mangled my comment. I was suggesting overriding __setattr__, not setattr.
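To illustrate the caveat raised above: even with a class-level __setattr__ override installed, object.__setattr__ bypasses it entirely (plain Python, no ARTIQ involved):

```python
class Guarded:
    def __setattr__(self, name, value):
        raise RuntimeError("object is locked")

g = Guarded()
override_blocked = False
try:
    g.x = 1                   # goes through the class override
except RuntimeError:
    override_blocked = True

# The escape hatch: object.__setattr__ skips the override entirely.
object.__setattr__(g, "x", 1)
```

After the last line, `g.x` is 1 despite the "lock", which is why the override can only be "good enough" rather than airtight.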

@sbourdeauducq (Member, Author) commented Mar 18, 2017

We don't know the scheduler's decision nor the host control flow of the next experiment, so the start of the next kernel should be when it is actually called, not when the previous session ends. And there could be multiple kernels preloaded, from different experiments or from the same. So this becomes essentially a kernel caching mechanism. Something like:

def prepare(self):
    self.core.preload_kernel(self.k1)
    self.core.preload_kernel(self.k2)

# if the session drops, both k1 and k2 are automatically unloaded by the device

@kernel
def k1(self):
    ...

@kernel
def k2(self):
    ...

def run(self):
    if foo:
        self.k2()
        # k2 is automatically unloaded after execution;
        # any objects it uses can be modified by the host again
        self.core.unload_kernel(self.k1)
    else:
        self.k1()
        self.core.unload_kernel(self.k2)
    # do other stuff, all objects are unlocked now
    ...

@whitequark (Contributor)

Does this imply multiple concurrent session connections?

@sbourdeauducq (Member, Author)

Yes.

@r-srinivas

What would happen if the previous experiment modifies a dataset that's called in the prepare of the subsequent experiment? That would mean it would get preloaded onto the kernel before that dataset was modified, right?

@jordens (Member) commented Mar 20, 2017

You mean "used in prepare() of the subsequent experiment"?
You need a barrier there. We are already pipelining prepare(), run(), and analyze() in that way.
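A toy sketch of such a barrier, using a plain threading.Event (illustrative only, not the actual scheduler code): prepare() of the next experiment blocks until the previous experiment's run() has written its result, so it cannot read a stale dataset.

```python
import threading

datasets = {"calib": 0.0}
run_done = threading.Event()      # the barrier

def previous_run():
    # the previous experiment's run() produces a result
    datasets["calib"] = 1.0
    run_done.set()

def next_prepare():
    # without this wait, prepare() could read the stale 0.0
    run_done.wait()
    return datasets["calib"]

t = threading.Thread(target=previous_run)
t.start()
value = next_prepare()
t.join()
```

Here `value` is guaranteed to be the updated 1.0, at the cost of serializing the two stages across that point.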

@sbourdeauducq (Member, Author)

You could obtain such datasets via RPC from the kernel, which is faster than kernel compilation and loading.
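A hedged sketch of that idea in plain Python (HostDatasets, Experiment, and the "detuning" key are invented for illustration; in ARTIQ the get() call would be an RPC from the kernel back to the host): the value is fetched when the kernel runs, not when it was preloaded, so it sees updates made by earlier experiments.

```python
class HostDatasets:
    """Stands in for the host-side dataset store."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        # in a real kernel, this call would be an RPC to the host
        return self._data[key]

class Experiment:
    def __init__(self, datasets):
        self.datasets = datasets

    def run_kernel(self):
        # fetched at run time, after preloading, so it reflects
        # whatever the previous experiment wrote
        return self.datasets.get("detuning")

host = HostDatasets()
exp = Experiment(host)            # kernel "preloaded" during prepare()
host.set("detuning", 1.5)         # previous experiment updates the dataset
result = exp.run_kernel()
```

The trade-off is an RPC round trip per access, which is still far cheaper than recompiling and reloading the kernel.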
