We need more information regarding that specific subject.
From what I understand, it should be possible to apply an FX to one voice on a track, then do some summing/mixing afterwards to add other processing on the way to the master bus. But I can’t figure out how to do it.
You actually have 4 FX resources per track. (BTW 6 FX/track would be perfect!!!)
For a so-called “Workstation” it should be possible (even usual) to use those resources either in serial mode (chained FX), in parallel mode (one FX dedicated to one specific voice/pulse), or a mix of both modes, as far as the PolyPulse CPU can handle the processing.
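To make the serial vs. parallel distinction concrete, here is a minimal Python/NumPy sketch. It is only a signal-flow illustration, not how the PolyPulse routes audio internally, and the two FX functions are made-up placeholders:

```python
import numpy as np

def drive(x, gain=4.0):
    # placeholder distortion: soft clipping
    return np.tanh(gain * x)

def one_tap_delay(x, delay=2205, feedback=0.4):
    # placeholder feedback delay (delay given in samples)
    y = x.copy()
    for n in range(delay, len(y)):
        y[n] += feedback * y[n - delay]
    return y

def serial_chain(voices):
    # serial ("chained FX"): sum all voices first,
    # then run the whole track through one FX chain
    track = np.sum(voices, axis=0)
    return one_tap_delay(drive(track))

def parallel_per_voice(voices):
    # parallel: each voice gets its own FX, and only the
    # processed results are summed afterwards
    # (assumes two equal-length voices here, just for illustration)
    processed = [drive(voices[0]), one_tap_delay(voices[1])]
    return np.sum(processed, axis=0)
```

In the serial case every voice passes through the same chain; in the parallel case each voice is processed on its own before the track is summed, which is the routing being asked about above.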
This is the very least kind of processing the PolyPulse should be able to do if you want a professional-sounding mixing “Workstation” and want to do it all in the box or DAWless.
Yeah, but it looks like the CPU constrains what we can do with FX: the more serious you get about sculpting FX chains, the more you’re going to need to go outboard. Messing around with toy projects, I’ve been getting crackling when the CPU reaches around 90%. I was using a total of 9 effects: 3 on track 1, 4 on track 2 (drums), and 2 on track 5 for the other tracks’ send FX. I expect I could have been more efficient, but that’s only 9 out of a possible 20, and it was already past the limit.
Multitracking out to my monster multi-FX box will provide a great experience, I’m sure, but that takes the PolyPulse out of the running for convenient portability to outside venues. I also feel like going to the outboard FX leaves a lot of the PolyPulse’s capability unused. On the other hand, it looks like the onboard FX are designed only to cover basic needs, with the PolyPulse’s real personality being the way you manage note lists, patterns, and touchpad morphing.
Manipulating the onboard FX in real time is a bit clumsy when you’re also handling that main task of note lists and patterns. If FX are basically set-it-and-forget-it, I might as well set and forget the outboard FX, since the PolyPulse’s FX interface would offer me no real-time playing advantage, and the onboard FX quality is less than top shelf because they’re not meant to be the PolyPulse’s focus.
It’s just hard to say at this point how it’s going to go because I’m still feeling my way around the possibilities. I wonder what @ward has to suggest regarding best practices for managing resources, and his approach to performing live efficiently so you don’t lose your place or accidentally create bad audio by maxing out the CPU.
Thanks for the clarification. I’m not planning on adding parallel placement of FX anytime soon.
There are some ways to get ‘parallel processing’ type effects:
Because the modulator system is poly/multichannel, you can apply a certain effect with different amounts to the multiple channels running through an audio effect. For instance, to get a different delay time for each channel, add the offset modulator to the time parameter.
Some effects have a dry/wet control; the most notable in this case is the distortion, where you can blend between the clean and distorted sound. This can be nice for bass sounds: get more grit from the distorted wet signal while keeping the low bass in the clean dry signal.
You can also do multiband processing. Start with the multiband effect, which splits the incoming signal into multiple frequency bands. Then add 1-2 FX (a limiter with very high gain can be fun for drums), and after that add the stereo effect to mix the frequency bands back down to a stereo signal (rough sketch of the idea below).
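Not actual PolyPulse code, just a rough Python sketch of that split → process → recombine flow; the crossover frequencies and the crude limiter are placeholders:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(x, sr=44100, crossovers=(200.0, 2000.0)):
    # split into low / mid / high bands with simple Butterworth crossovers
    low  = sosfilt(butter(4, crossovers[0], btype="low",  fs=sr, output="sos"), x)
    mid  = sosfilt(butter(4, crossovers,    btype="band", fs=sr, output="sos"), x)
    high = sosfilt(butter(4, crossovers[1], btype="high", fs=sr, output="sos"), x)
    return low, mid, high

def hard_limit(x, gain=8.0, ceiling=0.5):
    # crude "limiter with very high gain": boost hard, then clamp
    return np.clip(gain * x, -ceiling, ceiling)

def multiband_process(x):
    low, mid, high = split_bands(x)
    mid = hard_limit(mid)          # process only the mid band here
    return low + mid + high       # mix the bands back to one signal
```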
The maximum of 4 FX per track is more of an interface design choice. Where would the parameters of the extra two audio effects go? If audio effects were also under the X encoder like the modulators and envelopes, we would have only 3 encoders per effect instead of 4, which would greatly limit the flexibility of each FX device.
So, on the topic of CPU and limitations: I feel like there are two approaches to this in the design of hardware instruments:
1. Set limitations in such a way that the CPU can never be overloaded by the user. This seems to be the common choice for hardware music tech designers.
2. Try not to impose too many limitations and let the user be responsible for preventing CPU overload. This is the approach often taken with computer software / DAWs.
I’ve chosen approach 2, as I felt I would have had to make some very stupid limitations otherwise, and I think setting good universal limitations would be hard. If I said: only 16 voices for the whole machine + 2 FX per track, that would make people who use a lot of effects unhappy. If I instead said only 8 voices in total + 4 FX per track, it would make people who want to play big chords unhappy.
Some tips for preventing CPU overloads:
In my experience the two biggest factors for CPU usage are:
how many voices are playing (you can see the number of currently playing voices on the screen, just below and to the left of the CPU %)
how many FX are added to tracks
Also, some sound engines are a bit heavier in CPU usage than others. The quad engine and granular seem to use less CPU than other engines, while the resonator is a bit more CPU-hungry.
As for audio effects, in my experience reverb uses more CPU than the other effects. I usually want a delay, reverb, or fade delay on multiple tracks, so instead of adding one to each track I put them on track 5 and use the send knob to feed multiple sounds into them. This can make quite a bit of difference compared to a delay on every track.
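A rough sketch of why the shared send effect is cheaper: no matter how many tracks feed the bus, the expensive effect only runs once. The reverb function here is just a placeholder, not the PolyPulse’s algorithm:

```python
import numpy as np

def reverb(x):
    # placeholder for any expensive shared effect (imagine a real reverb)
    return x

def fx_per_track(tracks):
    # one reverb instance per track: the effect runs len(tracks) times
    return np.sum([reverb(t) for t in tracks], axis=0)

def send_bus(tracks, sends):
    # one shared reverb on a send bus: each track only contributes
    # a send amount, and the effect itself runs exactly once
    dry = np.sum(tracks, axis=0)
    bus = np.sum([amount * t for t, amount in zip(tracks, sends)], axis=0)
    return dry + reverb(bus)
```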
For me the magic of the PolyPulse’s audio effects is that their parameters are stored in the morph presets and can be controlled with the touchpads, and that the multichannel modulation system allows for some cool stuff with unique LFOs and randomized parameter values.
When I perform live with the PolyPulse I mostly switch between patterns / note lists, morph with the touchpads, and do some mixing with the volume and send controls. I’m usually not adding or removing effects, although sometimes I change some extra parameters by hand.
Yes, that’s a big and cool deal. The catch for me is that every track gets the identical send FX, varied only by each track’s send amount. I’m spoiled by my Axe-Fx-III, which provides five independent FX chains, four with stereo input and one with mono. The Axe-Fx seems an ideal partner to the PolyPulse: put the quad engine drums on the mono input, then have four stereo FX chains. I’d minimize the number of PolyPulse FX to keep its CPU down while making sure to put all reverbs on the outboard FX box.
I’d lose some of the benefits of morphing FX, but you’re saying the voice count has more impact on the CPU than the number of FX? I’ll have to see how I like an outboard FX configuration. This setup would provide some great sounds, but I need one more stereo input into the Axe-Fx for the keyboard that I use to jam with the PolyPulse. That keyboard absolutely requires outboard FX, and a solo instrument + PolyPulse combo is the main thing I want to do. I’ve thought I could route the external synth through one of the PolyPulse’s tracks, and then it would share all the FX, onboard and outboard, which would be good enough for me. But I haven’t tried the PolyPulse’s routing and line-in capabilities yet. I wonder what you think of this kind of arrangement?