Part 5: Implementation of ILC – Low cost real-time closed-loop control of a consumer printer

Introductory post: link
Previous post: link

In the previous post, I designed a feedback controller and obtained a model of the printer. With that, all the ingredients are in place to implement Iterative Learning Control (ILC).

ILC is a method that learns a feedforward signal for a specific task by repeating that task over a number of trials. It requires a model of the system, which need not be perfect, and it requires the task (reference) to be the same every trial. Moreover, there exist elegant conditions for monotonic convergence that make the design procedure relatively straightforward. If you’re unfamiliar with ILC, I recommend starting with, for example, this 2-page summary before reading on. A lot more references, e.g., on dealing with task flexibility or on lifted ILC, are included here. In this post, I will apply frequency-domain ILC.

Goal

The aim is to track the following third-order reference with the least positioning error possible:

Since the application is a printer, the constant velocity part is deemed most important.
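To make this concrete, a third-order (jerk-limited) setpoint like this can be built by integrating a piecewise-constant jerk profile three times. A minimal sketch with made-up numbers (the actual reference is the one plotted above):

```python
import numpy as np

# Minimal sketch of a third-order (jerk-limited) point-to-point setpoint,
# built by integrating a piecewise-constant jerk profile three times.
# All numbers are made up for illustration.
Ts   = 1e-3                      # sample time [s] (assumed)
j    = 1.4e5                     # jerk magnitude [mm/s^3]
n_j  = 50                        # samples per jerk phase (0.05 s each)
n_cv = 300                       # samples of constant velocity (0.3 s)

jerk = np.concatenate([
    +j * np.ones(n_j),           # build up acceleration
    -j * np.ones(n_j),           # back to zero acceleration -> constant velocity
         np.zeros(n_cv),         # constant-velocity part (most important here)
    -j * np.ones(n_j),           # decelerate
    +j * np.ones(n_j),           # back to zero acceleration -> standstill
])

acc = np.cumsum(jerk) * Ts       # jerk         -> acceleration
vel = np.cumsum(acc)  * Ts       # acceleration -> velocity
pos = np.cumsum(vel)  * Ts       # velocity     -> position: the reference
```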

Learning filter design

The ideal learning filter L(z) is the inverse of the process sensitivity GS(z) (in this post, L(z) denotes the learning filter, not the open loop!). Since the process sensitivity is strictly proper, i.e., the discrete-time system has more poles than zeros, its inverse, and therefore the ideal learning filter, is non-causal. Consequently, the feedforward signal at time t depends on the error at some future time t+a. Moreover, if GS(z) has non-minimum-phase zeros, which my model does, 1/GS(z) has poles outside the unit disc.

To simulate this, I use stable inversion, in which the unstable part of L(z) is split from the stable part and simulated backward in time – a ball rolling off the peak of a hill is an unstable response, but not if you play it backward in time! There’s more info on this in the links shown before.
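To make this concrete, here is a minimal sketch of such a stable inversion, assuming the learning filter is given as a proper discrete-time transfer function with simple (non-repeated) poles; any pure preview part of L(z) is assumed to be handled separately by shifting the result in time:

```python
import numpy as np
from scipy import signal

def apply_learning_filter(num_L, den_L, e):
    """Apply L(z) = num_L(z^-1) / den_L(z^-1) (the approximate inverse of GS)
    to an error signal e via stable inversion.

    Sketch only: assumes simple poles and a proper L(z). The stable part is
    simulated forward in time, the unstable part backward in time."""
    # Partial-fraction expansion in powers of z^-1
    r, p, k = signal.residuez(num_L, den_L)

    y = np.zeros(len(e))

    # Direct polynomial (FIR) part, if any
    if len(k) > 0:
        y += np.real(signal.lfilter(k, [1.0], e))

    for ri, pi in zip(r, p):
        if abs(pi) < 1.0:
            # Stable pole: causal simulation, forward in time
            y += np.real(signal.lfilter([ri], [1.0, -pi], e))
        else:
            # Unstable pole: anti-causal simulation. Filter the time-reversed
            # signal with the reflected (stable) pole 1/pi, then reverse back.
            b = [0.0, -ri / pi]
            a = [1.0, -1.0 / pi]
            y += np.real(signal.lfilter(b, a, e[::-1])[::-1])

    return y
```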

Without a robustness filter, ILC would converge monotonically if |1-GS(z) L(z)|<1 for all frequencies. Let’s evaluate:

|1-GS L| for both the FRF and the model. Even if the FRF were perfect (which it isn’t), with this parametric model of G we would require a robustness filter Q, because the plot exceeds 0 dB.

Right now, the condition for monotonic convergence is not met. We need a robustness filter Q for that.
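If you want to check this condition for your own model, the evaluation takes only a few lines. A sketch, where GS and L are the modelled process sensitivity and the learning filter as discrete-time transfer-function coefficients:

```python
import numpy as np
from scipy import signal

def convergence_criterion(GS_num, GS_den, L_num, L_den, Ts, f):
    """Evaluate |1 - GS(e^{jwTs}) L(e^{jwTs})| on a frequency grid f [Hz].

    GS is the modelled process sensitivity and L the learning filter; Ts is
    the sample time."""
    w = 2 * np.pi * np.asarray(f) * Ts          # frequency in rad/sample
    _, GS = signal.freqz(GS_num, GS_den, worN=w)
    _, L  = signal.freqz(L_num,  L_den,  worN=w)
    return np.abs(1 - GS * L)

# Monotonic convergence (for a perfect model, without Q) requires the returned
# values to stay below 1 (0 dB) at every frequency.
```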

Robustness filter design

With a robustness filter Q, the criterion for monotonic convergence becomes |Q’ Q (1-GSL)|<1 for all frequencies, where ’ denotes the conjugate transpose. With Q a first-order low-pass filter with a cutoff frequency of 80 Hz, we get:

|Q’ Q (1-GSL)|. Since 0 dB is not exceeded, we would have monotonic convergence if the FRF were perfect.

The filter is quite conservative, but since the FRF is imperfect, this gives us some margin for error. The price paid for this is that the final error of ILC, after convergence, will be higher: we don’t learn nearly as much at frequencies above 80 Hz now. Still, as shown below, this is enough to achieve some nice results.
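A sketch of such a Q filter and the corresponding check; the Butterworth-style design and the 1 kHz sample rate are assumptions on my part:

```python
import numpy as np
from scipy import signal

# Q: first-order low-pass with an 80 Hz cutoff (design choice and sample rate
# assumed for illustration).
Ts = 1e-3
Q_num, Q_den = signal.butter(1, 80, fs=1 / Ts)

def robust_criterion(GS_num, GS_den, L_num, L_den, f, Ts=Ts):
    """|Q'Q (1 - GS L)| on a frequency grid f [Hz]; must stay below 1 (0 dB)."""
    w = 2 * np.pi * np.asarray(f) * Ts
    _, Q  = signal.freqz(Q_num,  Q_den,  worN=w)
    _, GS = signal.freqz(GS_num, GS_den, worN=w)
    _, L  = signal.freqz(L_num,  L_den,  worN=w)
    return np.abs(np.conj(Q) * Q * (1 - GS * L))
```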

Results

Let’s start with the results before ILC. Using the feedback controller of the previous post and a feedforward signal f=0.65 U sign(v), where U is the maximum voltage and v is the velocity of the reference profile, we get:

Results without ILC. With an encoder resolution of about 0.035 mm/count, 50 counts corresponds to 1.7 mm.
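For completeness, the initial feedforward from the paragraph above is a one-liner; the value used for U below is just a placeholder:

```python
import numpy as np

def initial_feedforward(vel, U=12.0):
    """Pre-ILC feedforward from the text: f = 0.65 * U * sign(v).
    U (maximum voltage) is an assumed placeholder value; vel is the
    velocity of the reference profile."""
    return 0.65 * U * np.sign(vel)
```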

When implementing these L and Q filters with a learning rate of α=0.3 (see the references mentioned earlier; the learning rate helps to attenuate trial-varying disturbances), this is what happens:

Error over iterations with ILC. Only every 4th trial is plotted for clarity.
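For reference, the trial-to-trial update that produces these curves has the form f_{j+1} = Q(f_j + α L e_j). A minimal sketch, where apply_L is the stable-inversion learning filter sketched earlier; applying Q forward and backward (filtfilt), which matches the Q’Q term in the criterion, is an assumption on my part:

```python
from scipy import signal

def ilc_update(f_j, e_j, apply_L, Q_num, Q_den, alpha=0.3):
    """One trial-to-trial ILC update: f_{j+1} = Q( f_j + alpha * L e_j ).

    apply_L is the stable-inversion learning filter sketched earlier. Q is
    applied forward and backward here (zero-phase filtering), which matches
    the Q'Q term in the convergence criterion (an assumption on my part)."""
    return signal.filtfilt(Q_num, Q_den, f_j + alpha * apply_L(e_j))
```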

We have nice convergence:

2-norm of the error over iterations.

At the final trial, the performance is much better than where we started: the error is a factor of 8 smaller in the 2-norm and a factor of 3.6 smaller in the infinity-norm:

Performance after 20 trials of ILC. During constant velocity, the peak error is 3 counts: only 0.11 mm.

I’m very happy with these results; they really show the strength of ILC: with a simple feedback controller and a low-quality model, the performance is improved significantly in 20 trials. And that on a ~€6 microcontroller!

Recommendations

What if you wanted to decrease the error even further?

Analyzing the amplitude spectrum of the error profile after 20 trials of ILC, ignoring the samples where the reference is zero, we have:

Amplitude spectrum of the error after ILC, ignoring the samples where the reference is zero.
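Computing such a spectrum is straightforward; a sketch, where e and ref are the logged error and reference of the final trial:

```python
import numpy as np

def error_spectrum(e, ref, Ts):
    """Single-sided amplitude spectrum of the error, using only the samples
    where the reference is non-zero."""
    e = np.asarray(e)[np.asarray(ref) != 0]
    E = np.abs(np.fft.rfft(e)) / len(e)
    E[1:-1] *= 2                          # single-sided scaling
    f = np.fft.rfftfreq(len(e), d=Ts)     # frequency axis [Hz]
    return f, E
```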

The oscillating behavior in the error is clearly visible as a peak in the spectrum at 19 Hz. Hence, a good place to start could be to either (i) increase the model quality around 19 Hz, such that the learning filter better represents the real system at this frequency and learns to reduce the error there, or (ii) reduce this error with feedback, using an inverse notch filter at 19 Hz. Either way, I’m happy with the results as they are now, but there are plenty of steps forward.
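As an illustration of option (ii), an inverse notch is a second-order filter with a gain bump at 19 Hz and roughly unity gain elsewhere; a sketch with assumed damping values and sample rate:

```python
import numpy as np
from scipy import signal

# Sketch of an inverse notch at 19 Hz for the feedback controller: a gain bump
# of beta_num/beta_den (~20 dB here) at 19 Hz, unity gain elsewhere.
# Damping values and the 1 kHz sample rate are assumptions.
Ts       = 1e-3
w0       = 2 * np.pi * 19.0            # notch frequency [rad/s]
beta_num = 0.5                         # numerator damping
beta_den = 0.05                        # denominator damping

num = [1.0, 2 * beta_num * w0, w0**2]
den = [1.0, 2 * beta_den * w0, w0**2]
notch_num, notch_den = signal.bilinear(num, den, fs=1 / Ts)   # discretize
```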

Conclusion

Do you need a setup worth thousands of euros to play around with learning control techniques? As it turns out, a €6 microcontroller is enough. While I have access to Matlab, everything could just as well have been done in Python or any other language, since the communication to and from the microcontroller is done over a standard serial connection.
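As an illustration, the PC side of one trial could look like the sketch below using pyserial; the port name, baud rate, and float32 framing are assumptions, and the actual protocol used in this project is not shown here:

```python
import numpy as np
import serial  # pyserial

def run_trial(port_name, f_trial, n_samples):
    """Sketch of the PC side of one ILC trial over a serial link: send the
    feedforward for this trial and read back the logged error.

    Port name, baud rate, and the float32 framing are assumptions."""
    with serial.Serial(port_name, 115200, timeout=5.0) as port:
        port.write(f_trial.astype(np.float32).tobytes())   # send feedforward
        raw = port.read(4 * n_samples)                      # read logged error
    return np.frombuffer(raw, dtype=np.float32)
```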

I plan to make one more post in this series explaining what I learned about the practical side of control during this project. After that, I’m applying this real-time framework to other projects, so if you don’t want to miss that, subscribe to e-mail notifications in the sidebar!

Author: Max

I'm a Dutch PhD candidate at the Control Systems Technology group of TU/e.
