If your goal is to emulate the behavior and sound of a bucket brigade delay (BBD), then some background on how BBDs worked may be helpful. BBDs made use of a then-new semiconductor device, the CCD (charge coupled device). One way to think about a CCD is as a very long string of analog sample-and-hold (S&H) stages. The S&H triggers were all clocked by the same signal; think of it as a sample clock. When a sample trigger occurred, one S&H captured the voltage held by the previous S&H. So samples were passed from one S&H to the next at a rate determined by the clock. This is analogous to the old way water was brought to fires, the bucket brigade; hence the name.
The number of S&Hs in the CCD is fixed so the delay time through the CCD is constant for a given sample clock frequency. To change the delay time the sample clock needs to change; a slower clock means a longer delay. The BBD uses "sampled analog" and is thus subject to aliasing.
CCDs also had another characteristic that affected the sound of a bucket brigade delay: clock noise. The clock used to trigger the S&H-like chain would "leak" into the signal. When the clock rate was reduced low enough (for long delay times) this "clock leakage" could be audible. CCDs were notoriously noisy and had modest signal-to-noise ratios of around 60 dB, comparable to a 10-bit digital system.
Each stage of a CCD was a very simple circuit: a semiconductor switch and one capacitor. The switch was activated to transfer the charge from one stage to the next. However, if all of the switches were on simultaneously, the charge would ripple all the way down the chain; not the desired behavior! Instead, the circuit was designed to work in two phases: odd stages were switched on one phase, even stages on the other.
So the delay through the CCD is actually 1/2 what you would have expected for a given number of stages. The delay-time = Number-of-stages / (2 * clock-freq). Or clock-freq = Number-of-stages / (2 * delay-time).
CCDs used for choruses, phasers, flangers, etc. were usually 512 or 1024 stages long. Delays of 10 msec or less are typical, so the clock would operate at 50 kHz and higher (500 kHz for a 1 msec delay).
Providing echo-like delay effects required longer CCDs with a much slower sample clock. A 4096 stage device was considered large (and expensive). A 250 msec delay using a 4096 stage CCD requires a clock of only 8 kHz!
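Just to sanity check those numbers, here is the arithmetic in a few lines of Python (nothing Kyma-specific here):

```python
def bbd_clock_hz(stages, delay_s):
    """Clock frequency needed for a given delay: f = N / (2 * t)."""
    return stages / (2.0 * delay_s)

print(bbd_clock_hz(1024, 0.010))   # ~51 kHz for 10 msec from 1024 stages
print(bbd_clock_hz(1024, 0.001))   # ~512 kHz for 1 msec
print(bbd_clock_hz(4096, 0.250))   # ~8.2 kHz for 250 msec from 4096 stages
```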
Summarizing, the typical CCD used for a BBD has a variable frequency sampling clock to adjust or modulate delay times. Sampling rates could be as low as 8 kHz for a typical, modest duration echo effect, or 50 kHz to 500 kHz when used for chorus/flanger/phaser effects. The signal-to-noise performance was about 60 dB, equal to that of a 10-bit digital system.
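If it helps to see the "fixed stages, variable clock" idea in one place, here is a deliberately crude Python sketch (not a Kyma Sound) that treats the CCD as a short delay line advanced by its own clock; clock feedthrough, noise, and companding are all ignored:

```python
class ToyBBD:
    """N stages with two-phase clocking behave like a delay line of
    N/2 samples running at the (variable) BBD clock rate."""
    def __init__(self, stages=1024):
        self.line = [0.0] * (stages // 2)

    def tick(self, vin):
        """One clock period: sample the input, shift, emit the oldest charge."""
        vout = self.line[-1]
        self.line = [vin] + self.line[:-1]
        return vout

bbd = ToyBBD(4096)
# Calling tick() 8192 times per second gives roughly a 250 msec delay.
# Re-sampling this variable-rate stream back to a fixed audio rate is
# exactly the part that is awkward in Kyma, as discussed below.
```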
A more accurate emulation also has to account for circuit-induced distortion. Harmonic distortion around 1% was common, and could be much higher when large signal levels were used. These, and the previously mentioned noise and aliasing behavior, all contribute to the circuit's "color".
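One simple way to approximate that color outside of Kyma is a gentle waveshaper plus a low-level noise floor; the tanh curve and the -60 dB noise level below are assumptions to tune by ear, not measurements of any particular BBD chip:

```python
import math, random

def bbd_color(x, drive=1.1, noise_db=-60.0):
    """Soft saturation (roughly 1% THD at moderate levels, more when
    driven hard) plus a noise floor near the CCD's ~60 dB SNR."""
    noise = (random.random() * 2.0 - 1.0) * 10.0 ** (noise_db / 20.0)
    return math.tanh(drive * x) / drive + noise
```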
To help counteract the noise it was common to perform analog compression on the signal before it entered the CCD, and then reverse the process with analog expansion after the reconstruction filter. 2:1 compression/expansion ratios are a good starting point.
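A bare-bones way to prototype the 2:1 companding (simplified and feed-forward; the real compander chips used feedback level detectors with fixed time constants) is to divide by the square root of the envelope on the way in and multiply the compressed signal's envelope back in on the way out:

```python
import math

class EnvelopeFollower:
    """One-pole rectify-and-average detector used by both halves."""
    def __init__(self, coeff=0.999):
        self.coeff = coeff
        self.env = 1e-4                 # avoid divide-by-zero at startup

    def process(self, x):
        self.env = self.coeff * self.env + (1.0 - self.coeff) * abs(x)
        return max(self.env, 1e-4)

def compress_2to1(x, follower):
    """2:1 compression: halve the level in dB, i.e. gain = 1/sqrt(env)."""
    return x / math.sqrt(follower.process(x))

def expand_1to2(x, follower):
    """Undo it: gain = envelope of the compressed signal."""
    return x * follower.process(x)
```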
A typical bucket brigade delay would look something like this:
input -> compressor -> anti-alias lowpass -> CCD -> reconstruction lowpass -> expander
Various circuit designs were used for these stages. Lower cost delays, like those found in guitar pedals, might use less expensive techniques than an expensive delay used in broadcasting. Chorus and other modulation effects often left out the compression/expansion.
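Reusing the toy helpers sketched above, a per-sample version of that chain might look like the following. The one-pole filters are only placeholders, and in a real BBD only the CCD itself runs at the variable clock rate, a distinction this sketch glosses over:

```python
class OnePoleLP:
    """Placeholder lowpass; a real BBD used 2nd or 3rd order filters."""
    def __init__(self, coeff=0.3):
        self.coeff, self.y = coeff, 0.0

    def process(self, x):
        self.y += self.coeff * (x - self.y)
        return self.y

def bbd_chain(x, comp_env, exp_env, pre_lp, post_lp, bbd):
    y = compress_2to1(x, comp_env)      # compressor
    y = pre_lp.process(y)               # anti-alias lowpass
    y = bbd_color(bbd.tick(y))          # CCD plus its noise/distortion
    y = post_lp.process(y)              # reconstruction lowpass
    return expand_1to2(y, exp_env)      # expander
```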
Replicating this type of circuit in Kyma must start with an approach that emulates the BBD's variable sampling rate operation. This is pretty challenging since Kyma is a fixed sample rate system. I do not believe there is a standard Kyma Sound prototype that provides continuously variable up or down re-sampling and operates at audio rates.
I would suggest using the DelayWithFeedback, with interpolation enabled, to approximate the BBD's variable sample rate behavior. This will not replicate the aliasing but should provide similar pitch change effects.
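The essential trick behind an interpolated, modulated delay (and, I assume, roughly what DelayWithFeedback does internally when interpolation is enabled) is a fractional read position behind the write position. A bare-bones Python version, just to show the mechanics:

```python
class FractionalDelay:
    """Circular buffer with a linearly interpolated, modulatable read tap."""
    def __init__(self, max_samples=48000):
        self.buf = [0.0] * max_samples
        self.write = 0

    def process(self, x, delay_samples):
        self.buf[self.write] = x
        read = (self.write - delay_samples) % len(self.buf)
        i = int(read)
        j = (i + 1) % len(self.buf)
        frac = read - i
        out = (1.0 - frac) * self.buf[i] + frac * self.buf[j]
        self.write = (self.write + 1) % len(self.buf)
        return out
```

Sweeping delay_samples smoothly up or down per sample produces the same pitch bend you would get from changing a BBD's clock, minus the aliasing.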
A MemoryWriter paired with a Sample is probably not going to work "better" than a DelayWithFeedback; you'd likely just end up recreating the DelayWithFeedback. The MemoryWriter/Sample pair would have to be set up so that the Sample never plays back at a slower rate than the sample rate, otherwise the MemoryWriter will "wrap around"; an interesting effect, but not what you are looking for. So the MemoryWriter's length determines the maximum delay, and the Sample would always play back at higher than its root pitch.
You might get closer combining the MemoryWriter with a SampleWithTimeIndex plus a clever index generator. The strategy would be to have the index proceed more slowly and skip samples to mimic a lower sampling rate. In this case you'd set the MemoryWriter's length to determine the minimum delay time. The index can only skip whole samples, so you'd need to interpolate to achieve the effect of skipping non-integer numbers of samples. Such a device is similar to what a Sample does, except that instead of calculating the index at the sample rate it calculates the index at a variable rate.
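As a hypothetical illustration of that index generator (outside Kyma), here is a read index that advances by a variable increment and interpolates between the whole-sample positions it lands between; how the increment should track the emulated BBD clock, and the MemoryWriter's record position, is the part you would still have to work out:

```python
class VariableRateReader:
    """Reads a recorded buffer with an index that advances by a variable
    amount per output sample, interpolating the fractional positions."""
    def __init__(self, buf):
        self.buf = buf
        self.pos = 0.0

    def read(self, increment):
        i = int(self.pos)
        j = (i + 1) % len(self.buf)
        frac = self.pos - i
        out = (1.0 - frac) * self.buf[i] + frac * self.buf[j]
        # an increment above 1 skips samples; below 1 lingers on them
        self.pos = (self.pos + increment) % len(self.buf)
        return out
```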
The question is whether these MemoryWriter approaches really improve the emulation, or indeed whether you even want the aliasing artifacts.
Staying with the DelayWithFeedback approach and focusing on the other elements in the model, like the compression/expansion, distortion, noise, and filters, probably provides the best "bang for the buck".
One word about the filters. The filter behavior is one of the most important parts of the emulation. These were common second or third order analog lowpass filter circuits. Fixed delay times always used fixed frequency filters with cutoffs somewhere between 1/3 and 1/2 of the sampling rate. Variable filters like VCFs that could track the sample rate were not commonly used, except perhaps in high-end delay/echo applications. So a fixed filter is probably the best approach. The cutoff frequency is a tradeoff between aliasing and signal dullness; the choice in a particular BBD device was one of "design taste". Short modulated delays almost certainly always used a fixed filter. A variable filter would be simple to implement in Kyma and maybe worth experimenting with, but it's probably not an accurate BBD emulation.
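For experimenting outside Kyma, a third-order Butterworth lowpass from scipy is an easy stand-in for those fixed filter circuits. The 8 kHz clock and the cutoff at 0.4 times that clock are just assumed values:

```python
import numpy as np
from scipy.signal import butter, lfilter

audio_rate = 48000
bbd_clock = 8000                     # assumed long-delay clock rate
cutoff = 0.4 * bbd_clock             # between 1/3 and 1/2 of the BBD rate

b, a = butter(3, cutoff, btype='low', fs=audio_rate)    # 3rd order lowpass
filtered = lfilter(b, a, np.random.randn(audio_rate))   # one second of noise
```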
There is also another gentle-slope lowpass effect due to how the CCDs work. It can be approximated with a gradual reduction in gain, reaching around 6 dB as the frequency approaches 1/2 the sampling rate.
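If you want to model that droop separately from the main filters, a rough (assumed) gain curve that reaches -6 dB at half the emulated clock rate is enough to experiment with:

```python
def ccd_droop_db(freq_hz, bbd_clock_hz):
    """Rough linear-in-dB approximation of the CCD's gentle rolloff:
    0 dB at DC, about -6 dB at half the BBD clock rate."""
    nyquist = bbd_clock_hz / 2.0
    return -6.0 * min(freq_hz, nyquist) / nyquist

print(ccd_droop_db(2000, 8000))   # about -3 dB halfway up
print(ccd_droop_db(4000, 8000))   # -6 dB at half the clock rate
```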
I would also suggest thinking of the BBD as a new Kyma Sound prototype. Do not build the feedback loop into it. Instead, use the new BBD Sound in a patch that uses the FeedbackLoopInput and FeedbackLoopOutput prototypes to add feedback and possibly other embellishments.