Some Philosophical Paradigms in Education of Modeling and Control

The paper discusses some interesting, mainly philosophical paradigms of the modeling and control areas, which are still partly unsolved and/or only partially studied. First the possible introduction of a prejudice free control (similar to the term introduced for modeling by Rudolf KALMAN) is investigated. Next the real constraints in practical control systems are discussed; it is argued that the decisive ones are the amplitude limits of the actuators. Then the application of a HEISENBERG-type uncertainty relationship in control is discussed, connecting the robustness of the control and the quality of the modeling. Finally a special irregularity in the classical LQR control methodology is treated, investigating the unreachable poles behind the anomaly. The paper constitutes a review of a few specific problems in control theory; they are enumerated in the keywords and discussed on the basis of the references. The authors' aim is, at the least, to invite further authors to continue such a discussion.


Introduction
Some scientists believe that everything has already been solved in control, and consequently nothing remains to study or research. The purpose of this paper is to recall some interesting philosophical paradigms in the areas of modeling and control to prove the contrary.
Only a few questions are discussed here, but there are many more. Our aim is to encourage scientists to find further unsolved problems, blind spots and interesting paradigms, based partly on the modeling and control literature and partly on other disciplines.
If we can invite even a few further authors to continue our discussions, then this effort is worthwhile.
We found that this is a very good basis for education, too.

The YOULA Parameterization

The YOULA (Y or Q) parameterization is a classical method for linear time-invariant control systems to characterize all realizable stabilizing (ARS) regulators by

C = Q / (1 - QP)     (1)

for an open-loop stable plant, where the parameter Q ranges over the set of all stable, proper (Q at ω = ∞ is finite), real-rational transfer functions, having all poles within the unit disc in the discrete-time case [1], [5]. The inverse relationship is

Q = C / (1 + CP)     (2)

Observe that Q is the transfer function from the reference r to the process input u in the closed loop (see Fig. 1), where y_n is the output disturbance (or noise) signal of a SISO (Single Input Single Output) system. The transfer characteristics of the closed loop can be easily computed:

y = QP y_r + (1 - QP) y_n = y_t + y_d     (3)

where y_t is the tracking (servo) and y_d the regulating (or disturbance rejection) independent behavior of the closed-loop response, respectively.
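The algebra behind (1)-(3) can be checked numerically. The sketch below uses an illustrative first-order discrete-time plant and Youla parameter (neither is from the paper) and verifies, on a frequency grid, that the regulator C = Q/(1 - QP) indeed yields u/r = Q and y/r = QP:

```python
import cmath

# Hypothetical stable plant and stable, proper Youla parameter (illustrative).
def P(z):   # plant with a pole at z = 0.5
    return 0.2 / (z - 0.5)

def Q(z):   # any stable, proper parameter; pole at z = 0.3
    return 0.8 * z / (z - 0.3)

def C(z):   # the ARS regulator of equation (1): C = Q / (1 - Q P)
    return Q(z) / (1.0 - Q(z) * P(z))

# On the unit circle z = e^{j omega}, check the closed-loop identities
#   y/r = C P / (1 + C P) = Q P   and   u/r = C / (1 + C P) = Q.
for k in range(1, 200):
    z = cmath.exp(1j * cmath.pi * k / 200)
    L = C(z) * P(z)
    assert abs(L / (1.0 + L) - Q(z) * P(z)) < 1e-9   # tracking transfer = Q P
    assert abs(C(z) / (1.0 + L) - Q(z)) < 1e-9       # r -> u transfer = Q
print("Youla identities verified on the unit circle")
```

The check makes the "virtual loop opening" concrete: although C sits in a feedback loop, the transfer from r to u is the open-loop quantity Q itself.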
Because the ARS regulator was formulated for a one-degree-of-freedom (1DF) control system, it is not surprising that the tracking part y_t can be shaped by the prefilter shown in Fig. 2, where the ARS regulator really "virtually" opens the closed loop:

y_t = QP y_r     (4)

Figure 3. The KB-parameterized 2DF system with an ARS regulator

An important and new observation of the authors was that the scheme in Fig. 2 is equivalent to the special control system given in Fig. 4, and this parameterization has been named the KEVICZKY-BÁNYÁSZ (KB) parameterization [1], [2]. Since in the case of the special structure presented in Fig. 3 relationship (3) holds, it is easy to introduce a new general form of two-degree-of-freedom (2DF) control systems providing

y = Q_r QP y_r + (1 - QP) y_n     (5)

if a serial compensator Q_r is additionally applied, as Fig. 5 shows. Here and in the sequel the general notation y_r is used for the reference signal of 2DF systems. Equation (5) shows that the tracking properties can be designed independently of the regulating behavior by means of Q_r. This last scheme was later named a generic two-degree-of-freedom (G2DF) system [1], [2]. The KB parameterization of closed-loop control is not as widely known as the YOULA-KUCERA (Y-K) parameterization [4]; however, it is much closer to a control engineering view, and its most important advantage in 2DF systems is that it virtually opens the closed loop. However, this parameterization can only be applied to open-loop stable processes.

Figure 5. The generic 2DF (G2DF) control system

A G2DF control system is shown in Fig. 5, where y_r, u, y and y_n are the reference, process input, output and disturbance signals, respectively.
The optimal discrete-time ARS regulator of the G2DF scheme [1], [2] is given by

C_opt = Q / (1 - R_n G_n G_- z^{-d})     (6)

where

Q = R_n G_n G_+^{-1}     (7)

is the associated Y-parameter [1]; furthermore

P = P_+ P_- e^{-sT_d}  ;  G = G_+ G_- z^{-d}     (8)

i.e., the continuous-time process is factorable as P_+ P_- e^{-sT_d} and the discrete-time process as G_+ G_- z^{-d}, where P_+, G_+ mean the inverse stable (IS) and P_-, G_- the inverse unstable (IU) factors, respectively. Here T_d is the continuous time delay and z^{-d} corresponds to the discrete time delay, which is an integer multiple of the sampling time T_s.
It was shown [1], [2] that the optimization of the G2DF scheme can be performed in the H_2 and H_∞ norm spaces by the proper selection of the serial and embedded filters (compensators). These optimizations reduce to the optimal computation of the G_r and G_n embedded filters. If G_r and G_n are optimally selected, then (6) denotes the optimal ARS regulator. It is interesting to see how the transfer characteristics of this system look:

y = R_r G_r G_- z^{-d} y_r + (1 - R_n G_n G_- z^{-d}) y_n

Here R_r and R_n are stable and proper transfer functions, partly capable of placing desired poles in the servo and regulatory transfer functions; furthermore they are usually referred to as the reference signal and output disturbance predictors. They can even be called reference models, so reasonably R_r(ω = 0) = 1 and R_n(ω = 0) = 1 are selected. In this case the obtained regulator is always an integrating one.

Prejudice Free Control
The knowledge of a process is never exact, independently of the method by which its model is determined, whether measurement-based identification (ID) or physico-chemical theoretical considerations are used. The uncertainty of the plant can be expressed by the absolute model error

∆P = P - P̂     (13)

and the relative model error

ℓ = ∆P / P̂ = (P - P̂) / P̂     (14)

where P̂ is the available nominal model used for regulator design and P is the real plant.
The parameters of the plant may change around their nominal values in a given range. The closed-loop control system needs to remain stable under the given uncertainty ranges of the parameters.
Suppose that the open loop is stable. The regulator designed for the nominal plant ensures the stability of the nominal closed-loop control system. Robust stability means that the closed-loop control system should not display unstable behavior even under the "worst case" parameter changes. The bound for ∆L can be formulated on the basis of Fig. 6 by taking simple geometrical considerations into account: the NYQUIST diagram will not encircle the -1 + 0j point if the following relationship is satisfied for all frequencies:

|∆L(jω)| < |1 + L(jω)|     ∀ω

With further straightforward manipulations the necessary and sufficient condition for robust stability is obtained as

|ℓ(jω)| < 1 / |T(jω)|     ∀ω     (17)

or, equivalently, as the product inequality

|ℓ(jω)| |T(jω)| < 1     ∀ω     (18)

Where |T(jω)| is high, very accurate information is needed to reach a small error: the higher the absolute value of the complementary sensitivity function, the smaller the permissible parameter uncertainty.
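The robust stability test above is easy to evaluate on a frequency grid. A minimal sketch with illustrative numbers (a nominal first-order plant, a worst-case gain perturbation of 30 % and a proportional regulator, none of which come from the paper):

```python
import cmath  # noqa: F401  (complex arithmetic via 1j literals)

# Illustrative nominal model and "worst case" perturbed plant.
def P_nom(s):                 # nominal model used for the design
    return 1.0 / (s + 1.0)

def P_real(s):                # real plant: +30 % gain error
    return 1.3 / (s + 1.0)

C_gain = 4.0                  # proportional regulator tuned on the nominal model

# Check |l(jw)| < 1/|T(jw)| over a frequency grid (condition (17)).
robust = True
for k in range(1, 400):
    s = 1j * (0.05 * k)
    L = C_gain * P_nom(s)
    T = L / (1.0 + L)                            # nominal complementary sensitivity
    ell = (P_real(s) - P_nom(s)) / P_nom(s)      # relative model error
    if abs(ell) >= 1.0 / abs(T):
        robust = False
print("robust stability condition holds:", robust)
```

Here |ℓ| = 0.3 at every frequency while |T| never exceeds 0.8, so the condition holds with a comfortable margin; shrinking the loop bandwidth (smaller C_gain) loosens the bound further, exactly as the text describes.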
The robust stability conditions (17) and (18) can be rearranged in the form |ℓ| ≤ η, where the η function is plotted in Fig. 7. The interpretation of this function is very interesting. For small modeling errors a model-based controller design is suggested, which is usually based on the inverse of the nominal model P̂^{-1}. For very large errors no regulator design is advised. However, in the midrange domain, where the error is around 100 %, the regulator design practically does not depend on our knowledge of the process. This area can be called the "prejudice free" domain. The condition of robust stability for the YP control loops can be further simplified, since there the nominal complementary sensitivity function is T = R_n G_n G_- z^{-d}, where T_d is the dead time of the model; on the unit circle the delay factor has unit magnitude, so expression (18) becomes

|ℓ(jω)| |R_n G_n G_-| < 1     ∀ω     (21)

The inequality (17), limiting the relative error, is now

|ℓ(jω)| < 1 / |R_n G_n G_-|     ∀ω     (22)

If the process is IS, i.e., G_- = 1, then G_n = 1 can be chosen and the condition of robust stability can be further simplified as

|ℓ(jω)| < 1 / |R_n(jω)|     ∀ω

i.e., it does not depend on the model P̂ but only on the reference model, that is, on the design goal.
The reference model is an important parameter of the general YOULA design, by means of which the condition of robust stability (22) can be guaranteed. Let R_n be a simple first-order lag; then the constraining condition given by the right side of (22) can be seen in Fig. 8. Given the latter condition and choosing a first-order reference model R_n, we see that robust stability can be ensured even in the case of 100 % relative model error. Furthermore, for the high frequency domain a real prejudice free case is obtained. If the process is IU, the factor G_n G_- also appears in (21) and can significantly modify (22). Fig. 9 shows a case where two unstable zeros seriously decrease the prejudice free character of the stability. The worst case is when this factor has a large value in the region of the cut-off frequency. KALMAN investigated the possibility of a prejudice free identification/modeling methodology [6]. He could not find any generally applicable results; however, many interesting, almost philosophical statements were developed.

Process Behaviors Constraining the Control
It has always been an important question in control theory papers what the real constraints are that strongly affect the result of the controller design.
We will discuss here the invariant process factors and the operating signal domain of the actuator. These factors are independent of the designer and of the available methods, whether theoretically or experimentally founded.
One class of constraints appearing in controller design is that the invariant factors of the real process, i.e., the unstable zeros and the time delay, cannot be eliminated by any control algorithm. Thus the best reachable closed-loop performance partly depends on the process itself. If someone wants to change these elements, only the redesign and rebuilding of the technology helps.
The ideal design goals formulated by the reference models R_r and R_n can be reached only for stable and inverse stable, delay-free processes. In the case of inverse unstable processes only the approximate goals R_r G_r P_- e^{-sT_d} and R_n G_n P_- e^{-sT_d} can be reached [1]; for discrete time the corresponding goals are R_r G_r G_- z^{-d} and R_n G_n G_- z^{-d}. The influence of the unstable zeros can be somewhat attenuated using the G_r and G_n embedded filters. So only in the case of stable process zeros can we obtain an optimal controller independent of the invariant process factor(s).
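Why an unstable (inverse unstable) zero is invariant can be seen in simulation: it forces an initial response in the wrong direction that no regulator can remove, since cancelling the zero would require an unstable controller pole. A minimal sketch with a hypothetical discrete-time plant having a zero at z = 2 (outside the unit circle), described by y[k] = 0.5 y[k-1] - u[k-1] + 2 u[k-2]:

```python
# Unit step response of a non-minimum-phase (IU) discrete-time plant.
# The zero at z = 2 (numerator -z + 2) produces an initial undershoot.
def step_response(n):
    u = lambda k: 1.0 if k >= 0 else 0.0   # unit step input
    y = [0.0]                              # y[0]
    for k in range(1, n):
        y.append(0.5 * y[k - 1] - u(k - 1) + 2.0 * u(k - 2))
    return y

y = step_response(60)
print("first samples:", [round(v, 3) for v in y[:4]])  # starts negative
print("steady state ~", round(y[-1], 3))               # DC gain = (-1+2)/0.5 = 2
```

The first sample is -1 although the steady-state value is +2: the inverse response is a property of the process, not of any particular regulator.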
So before designing a control system it is better to clarify what the process will allow us to reach; this final limit will not depend on our skill or on the applied theoretical approach and/or methodology.

Actuator Behaviors Constraining the Control
There is another class of constraints which is independent of the applied regulator design and/or control system: the ever-present amplitude limits of real actuators. Control theorists sometimes forget about these constraints, because on their platforms, which are almost always computer-based, they are limited only by the very large floating-point number representations available in different software environments. Unfortunately the real actuators know nothing of these internal representations, despite the fact that modern control algorithms are nowadays based on discrete-time computer control. The amplitude limit of a real actuator is always finite, and we must not forget this reality. If we want to speed up a slow process by modern control, the result does not depend on the theoretical strength of the applied algorithm; it depends primarily on the available amplitude limit at the output of the actuator.
Regulators always invoke zeros to accelerate the process. To demonstrate this, let us investigate the case shown in Fig. 10, where a phase-lead element is connected in series before a first-order lag element. At the first moment, in response to a unit step, a signal value of 10 appears at the output of the phase-lead element, i.e., at the input of the first-order lag. The lag starts with a high gradient in order to reach this value as soon as possible with its own time constant, and by the time the input signal has settled down the output has almost reached its steady-state value. The cost of the acceleration is the so-called over-excitation, i.e., the ratio of the initial and final signal values at the input of the lag. Acceleration can only be achieved with an over-excitation greater than one. In many cases it is worth applying pole cancellation, where zeros are invoked to cancel the undesirable poles causing slow operation, and a pole ensuring more favorable behavior is inserted instead. It is thus obvious that over-excitation means the control equipment will have an initial peak at its output in response to unit step commands or disturbances. The problem is that the output of the regulator in the closed loop, or the output of the actuator amplifying the signal to the proper level, is always amplitude-restricted.
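The over-excitation mechanism can be sketched with a short Euler simulation. Assuming, consistently with the ratio of 10 above, a lead (1 + 10s)/(1 + s) driving a lag 1/(1 + 10s) (the cascade cancels the slow pole and responds with time constant 1 instead of 10):

```python
# Euler simulation of a phase-lead element (1 + 10s)/(1 + s) in series with
# a first-order lag 1/(1 + 10s), driven by a unit step.
T1, T2 = 10.0, 1.0            # slow (cancelled) and fast time constants
dt, n = 1e-3, 50_000          # 50 s of simulated time
x = z_out = 0.0               # lead internal state, lag output
u = 1.0                       # unit step input
peak = 0.0
for _ in range(n):
    v = (T1 / T2) * u + (1.0 - T1 / T2) * x   # lead output = input of the lag
    peak = max(peak, v)
    x += dt * (u - x) / T2                     # lead state: (u - x)/T2
    z_out += dt * (v - z_out) / T1             # lag: (v - z)/T1
print("over-excitation (initial/final lag input):", round(peak / u, 2))
print("output after 50 s:", round(z_out, 3))
```

The lag input jumps to 10 and settles to 1, an over-excitation of 10, while the output settles with the fast time constant T2. A real actuator would have to deliver that tenfold peak.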
The sensitivity function of the real closed loop can be written in a decomposed form of three terms: a design (reference model) term, a realizability term and a modeling-error term. This triple decomposition of the sensitivity function (25) gives good insight into the limit-optimality (limits of the optimality) of closed-loop control systems, i.e., into the characterization of the best achievable control. Following this distinction, optimality criteria need to be created for each term, for both the tracking and the disturbance rejection behaviors.
Here the notation ‖·‖ is used to express the optimality criterion; in strict mathematical analysis it refers to the chosen norm of the transfer function. It is not an easy task to optimize all three terms simultaneously; in practice iterative techniques are used, whereby the optimization problem is solved step by step. The optimization of the first term in the decomposition of the sensitivity function (25) means the determination of the best (fastest) reachable reference models R_r = R_r^opt and R_n = R_n^opt, i.e., the solution of the tasks

R_r^opt = arg min J_des^r = arg min ‖1 - R_r‖  ;  R_n^opt = arg min J_des^n = arg min ‖1 - R_n‖     (27)

where the chosen criteria J_des^r = ‖1 - R_r‖ and J_des^n = ‖1 - R_n‖ state that each reference model should approach the ideal unity. This task must be solved under the constraint for the regulator output u ∈ U. Here U means the allowable region for u, i.e., U : |u| ≤ 1 (see (24)).

Redesign of the Reference Model
The optimization task (27) is very difficult because the solution always lies on the border of the limited region. There is no analytical solution except for some low-order simple cases; the optimal reference models are usually determined by simulation (CAD tools). Note that under the given constraints faster reference models cannot be used to solve task (27). Vice versa, if under the constraints and the design goal there is no solution for the reference models, then the only option is to prescribe a less demanding (usually slower) model. Thus the best (fastest) reachable response of the closed system basically depends on the constraints of the control output. Of course, equation (27) contains the applied regulator and also the process in a complex way; thus it is an implicit, closed-loop expression. Therefore the optimality of the regulator depends on the process, the model and the invariant factors.
Even in the case of a very careful design it can happen that the over-actuated output of the obtained regulator goes beyond the acceptable signal domain. Then the original design goals need to be reduced. The advantage of the KB parameterization of generic 2DF control loops is that only the problematic (over-demanding) reference models R_r and R_n need to be changed to accommodate less demanding design conditions. Usually this can be performed only step by step, via an iterative procedure; the steps can contain model simulations and also experiments on the real process. Therefore this optimization is often termed the redesign of the reference model. In the case of low-order reference models the time constant of the model (i.e., the bandwidth) can be determined by an explicit design expression if the process model and the amplitude constraint U_max are known.
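The redesign iteration can be sketched for the simplest case. Assuming (illustratively, not the paper's example) a first-order plant 1/(1 + 10s), a first-order reference model 1/(1 + Ts) and inverse-based design, the initial value theorem gives a control peak u(0) = 10/T for a unit reference step; the iteration slows the reference model until this peak respects the actuator limit:

```python
# Reference model redesign by bisection on the model time constant T.
T_plant, U_max = 10.0, 5.0      # plant time constant and actuator limit (assumed)

def u_peak(T_ref):
    # u(0) = T_plant / T_ref for inverse-based design with a unit step reference
    return T_plant / T_ref

lo, hi = 0.01, T_plant          # lo: too fast (infeasible), hi: slow (feasible)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if u_peak(mid) > U_max:
        lo = mid                # violates the amplitude limit: slow down
    else:
        hi = mid                # feasible: try a faster model
print("redesigned time constant T =", round(hi, 3))
print("resulting control peak     =", round(u_peak(hi), 3))
```

The iteration converges to T = 2, the fastest reference model whose peak demand (10/2 = 5) the actuator can still deliver; in this low-order case the same answer follows from the explicit expression T = T_plant / U_max.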
Let the process be given by its pulse transfer function and apply a combined iterative identification and control test [12]. Reference models of unity gain are used for the design. At the start of the iteration an initial model is assumed. A square signal with a period of 40 samples is applied as reference signal. In the simulation it is assumed that the additive noise y_n is white noise with variance σ²_{y_n} = 0.01. The number of processed samples is N = 100. Because of the small output noise the identification is performed by a simple off-line LS method. The regulator is designed by the YP method, assuming an IU process.

Figure 11. Response of the YP regulator before the iteration

The output of the regulator is presented in Fig. 11, where it can be seen that the over-excitation is very high, about 900 %, i.e., u(t) peaks at 9. Assume that the actuator can realize only |u(t)| ≤ 5.
This requires the redesign of the reference model R_r. The condition |u(t)| ≤ 5 needs to be prescribed for the reference model redesign iteration. It can be seen in Fig. 12 that by the end of the iteration the control output conforms to the prescribed over-actuation limit. The obtained redesigned reference model is given by (31). Figure 13 shows the time function of the output of the reference model (continuous line) and of the closed system (dashed line). The final conclusion is that the control quality can never be prejudice free; more exactly, it can never be a process-independent and installed-actuator-independent property of the final control system.

The HEISENBERG-Type Uncertainty of Control
The condition of robust stability (18) already contains a product inequality. Here |T(jω)| (although usually called a design factor) can be considered the quality factor of the control, while the other factor, |ℓ(jω)|, can be considered the relative correctness of the applied model. In the light of practical experience, control engineers favor applying the mostly heuristic expression

(quality of the control) × (robustness of the control) ≤ limit

This product inequality can be simply demonstrated by the integral criteria of classical control engineering. Let I_2 be a square integral criterion (Integral Square of Error, ISE) whose optimum is I_2* when the regulator is properly set, and let the NYQUIST stability limit (i.e., the robustness measure) be ρ_m. The well-known empirical, heuristic formula states that the product I_2* ρ_m is approximately constant; the inequality is illustrated in Fig. 15. The fact that the quality of the identification (which is the inverse of the model correctness) has a definite relationship with the robustness of the control is not at all trivial. This can be observed only in a special case, namely in the identification technique based on the KB parameterization. The three basic relationships above can be summarized in similar product inequalities. These results are not surprising in themselves; what makes them special is that they remain valid for the modeling error in the case of KB-parameterized identification methods. So it can be clearly seen that when the modeling error decreases, the robustness of the control increases: if the minimum of the modeling error δ_M is decreased, then the maximum of the minimum robustness measure ρ_m is increased, since δ_M ρ_m = 1. Similar relationships can be obtained if the H_2 norm of the "joint" modeling and control error is used instead of the absolute values; introducing a relative fidelity measure and using the former equations, an analogous interesting relationship is obtained for the relative quadratic identification error.
The product of the modeling accuracy and the robustness measure of the control must not be greater than one when the optimality condition is reached; the obtained uncertainty relation can also be written in this equivalent form. The earlier results of control engineering stated only that the quality of the control cannot be improved except at the expense of the robustness; this result, which connects the quality of the identification and the robustness of the control, can therefore be considered novel.
This phenomenon can arguably be considered the HEISENBERG-type uncertainty relationship of control engineering, in analogy with ∆z ∆p ≥ h/4π, where ∆z and ∆p are the uncertainties of the canonical coordinate and momentum variables, respectively; their inverses correspond to the generalized accuracy and "rigidity", known as performance and robustness in control engineering. The consequence of the new uncertainty relation is very simple: KB-parameterized identification is the only method where the improvement of the modeling error also increases the robustness of the control. With other methods and other identification topologies, the modeling and control errors are interrelated in a very complex way, and in many cases this relation cannot be given in explicit form. This is the main reason why it is difficult to elaborate a method which guarantees, or at least forces, similar behavior of the two errors, though some results can be found in [3]. There is a myth in the literature concerning an antagonistic conflict between control and identification: a "good" regulator minimizes the internal signal changes in the closed loop, and therefore most identification methods, which use these inner signals, provide a worse modeling error when the regulator is better. The exciting signal of KB-parameterized identification is an outer signal, and therefore this phenomenon does not exist there. The relevant feature of this relationship is shown in Figs. 17 and 18 for a general identification method and for a KB-parameterized technique.
In Fig. 17 there is no clear relation between δ_ID and δ, or σ_ID and σ, and therefore it is not guaranteed that minimizing δ_M increases ρ_m. In Fig. 18, by contrast, the KB-parameterized scheme ties the two quantities together, so decreasing the modeling error directly increases the robustness measure.

Irregularities in Classical Control Methods
A further myth in the control literature is that everything is right and errorless in the classical works of the theory. This is unfortunately (or fortunately?) not so. T. KEVICZKY recognized that the solution of the classical LQR paradigm does not provide full pole placement, and some areas can be considered irregularities of this classical theory. KEVICZKY and BÁNYÁSZ [9], [10] gave a detailed analysis proving that there are some poles (interestingly, the slower ones) which cannot be allocated by the classical methodology. In some earlier references it was also observed, by JOHNSON [11], that some poles are unreachable in LQR theory. Finally BOKOR and KEVICZKY [12], [13] presented a possible method which resolves the irregularity of this paradigm.
The LQR (Linear system - Quadratic criterion - Regulator) Problem

This optimization paradigm was formulated with the general quadratic criterion [7], [8]

J = ∫₀^∞ [ xᵀ(t) W_x x(t) + uᵀ(t) W_u u(t) ] dt

for the linear state equation ẋ = A x + B u. The state feedback (SF) to be applied is given by

u = -K x ,  K = W_u^{-1} Bᵀ P

where P is the solution of the corresponding algebraic RICCATI equation.
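The pole-placement limitation discussed above is already visible in the scalar case, which admits a closed-form Riccati solution. A minimal sketch (an illustrative derivation, not the cited analysis): for dx/dt = a x + b u with criterion J = ∫ (q x² + r u²) dt, the Riccati equation 2ap - (b²/r)p² + q = 0 gives u = -k x with k = bp/r, and the closed-loop pole is a - bk = -sqrt(a² + q b²/r).

```python
import math

def lqr_scalar(a, b, q, r):
    """Scalar LQR: solve 2ap - (b^2/r)p^2 + q = 0 for the stabilizing p >= 0."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    k = b * p / r                     # optimal state feedback gain
    return k, a - b * k               # gain and closed-loop pole

a, b = -1.0, 1.0                      # illustrative stable scalar plant
for q_over_r in (1e-6, 1.0, 100.0):
    k, pole = lqr_scalar(a, b, q_over_r, 1.0)
    print(f"q/r = {q_over_r:g}: closed-loop pole = {pole:.3f}")
# The pole is always -sqrt(a^2 + q b^2 / r) <= -|a|: no weighting choice can
# place it in the interval (-|a|, 0), i.e., the slower pole locations are
# unreachable for LQR, which illustrates the irregularity discussed above.
```

Sweeping q/r from zero to infinity moves the pole from -|a| out to -∞, but never into the slower region between -|a| and the origin.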