You’re supposed to record the instrument into solid memory storage with a push, then copy a phrase of wavetables through the device’s memory buffer to a new memory location using an electromagnetic mirror pulse of the recorded phrase, with /denycopyfromothermusician applied at the initial point of the wave recording so the copy can cut grooves. The phrase is then lifted outward from a third track, where the total corpus of the work in progress is stored in “reverse push mirror.” A laser column at a slightly different frequency and diameter, built around the original column’s pressure column on the interior surface of the middle track, balances the sound to the proper wave thickness at the correct location on the disk (so the grooves blow out instead of sucking in).
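If I had to sketch the record-then-mirror-copy step in code, it might look something like the following. Every name here (`MemoryTrack`, `record`, `mirror_copy`, the deny flag) is invented for illustration; no real device API is implied, and the “electromagnetic mirror pulse” is stood in for by a simple polarity-reversed copy.

```python
# Hypothetical sketch of: record a phrase, then mirror-copy it to a second
# memory location so the grooves "blow out instead of sucking in".
from dataclasses import dataclass, field

@dataclass
class MemoryTrack:
    """One memory location on the hypothetical device (names are made up)."""
    samples: list = field(default_factory=list)

def record(track: MemoryTrack, phrase: list) -> None:
    # "record the instrument into solid memory storage with a push"
    track.samples = list(phrase)

def mirror_copy(src: MemoryTrack, dst: MemoryTrack,
                deny_copy_from_other_musician: bool) -> None:
    # Stand-in for the "electromagnetic mirror pulse": write a mirrored
    # (polarity-reversed) image of the phrase to the second location.
    if deny_copy_from_other_musician:
        # /denycopyfromothermusician at the initial point of the recording;
        # real enforcement would check provenance here, omitted in this sketch.
        pass
    dst.samples = [-s for s in src.samples]

# Usage: record a phrase on track 1, mirror it onto track 2.
track1, track2 = MemoryTrack(), MemoryTrack()
record(track1, [0.1, 0.5, -0.3])
mirror_copy(track1, track2, deny_copy_from_other_musician=True)
print(track2.samples)  # the mirrored image of the recorded phrase
```

This is only a data-flow sketch under those assumptions; the real device, if it existed, would be doing this in hardware.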
You can label each edit starting from the inception of the track at 0.0.0.0.0.0.0, tagging each new one in the memory register as [[source…][album…][song…]] | [[musician…]/[instrument…]/[phrase…]/[datalocation…]]. Each newly collected “data piece” is then added at the correct point in the sequence, with each added track riding a circular flywheel with multiple bars in parallel, so you can re-press it into the corpus repeatedly and play it back from the proper memory location on the recording medium with independent, selectable spindles. Speeds across the devices have to be compensated and matched because they’ve been idling (due to processing, reading, writing, memory transfer, bussing, temporary and permanent memory storage interfaces, and track thickness).
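The labelling scheme above could be sketched as a seven-part version counter plus a tag per data piece. All class names, fields, and the bump rule below are assumptions of mine, not anything the device actually defines; the elided fields (source…, album…, etc.) are just placeholder strings.

```python
# Hypothetical sketch of the edit-labelling scheme: a seven-field version
# starting at 0.0.0.0.0.0.0, plus source/album/song and
# musician/instrument/phrase/datalocation tags in a "memory register".
from dataclasses import dataclass

@dataclass(frozen=True)
class EditLabel:
    version: tuple       # e.g. (0, 0, 0, 0, 0, 0, 0) at inception of track
    source: str
    album: str
    song: str
    musician: str
    instrument: str
    phrase: str
    data_location: str

def next_version(version: tuple, level: int) -> tuple:
    # Bump one of the seven counters and reset everything below it,
    # like a semantic version with seven fields (my assumption).
    bumped = list(version)
    bumped[level] += 1
    for i in range(level + 1, len(bumped)):
        bumped[i] = 0
    return tuple(bumped)

register = []  # the "memory register": labels kept in sequence order

v0 = (0, 0, 0, 0, 0, 0, 0)
v1 = next_version(v0, level=6)  # first collected data piece
register.append(EditLabel(v1, "source", "album", "song",
                          "musician", "guitar", "phrase-A", "track-3"))
print(v1)  # (0, 0, 0, 0, 0, 0, 1)
```

Appending labels in order is what would let the flywheel re-press the corpus repeatedly and still find each piece at the proper memory location.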
I don’t know whether software, hardware, or spacetime is built for that, or whether the software is built for a consumer model or an uploader/producer model. There are probably clock-frequency timing issues from interactions among the software, the OS, the hardware, and the musician, especially if everyone is mic’ed on their own disparate hardware setups. Cross-compatibility is key if uniformity isn’t.
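The clock-frequency mismatch between disparate setups is at least a real, well-understood problem: if two devices’ sample clocks drift, one stream has to be resampled by the measured rate ratio. Below is a minimal pure-Python linear-interpolation sketch; the ratio is assumed already known (in practice it would be estimated from timestamps), and no real audio API is implied.

```python
# Hypothetical sketch of clock-drift compensation: stretch a signal recorded
# on a fast/slow clock back to nominal time by linear interpolation.

def resample(samples: list, ratio: float) -> list:
    # ratio = actual_rate / nominal_rate; ratio > 1 means the source clock
    # ran fast, so the signal is shortened back to nominal duration.
    n_out = max(1, int(len(samples) / ratio))
    out = []
    for i in range(n_out):
        pos = i * ratio           # fractional position in the source stream
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Usage: a clock running at exactly nominal rate passes through unchanged;
# one running twice as fast gets compressed back to half the samples.
print(resample([0.0, 1.0, 2.0, 3.0], 1.0))  # [0.0, 1.0, 2.0, 3.0]
print(resample([0.0, 1.0, 2.0, 3.0], 2.0))  # [0.0, 2.0]
```

This only matches steady rate offsets; the idling-related speed changes described above would need the ratio re-estimated continuously.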