

The MicroDYN approach

Some years ago, Greiff and Funke (2009) proposed the MicroDYN approach (see also Funke & Greiff, 2017; Greiff, Wüstenberg, & Funke, 2012; Wüstenberg, Greiff, & Funke, 2012). This approach was developed from a psychometric perspective for use in large-scale assessments (such as PISA). It consists of a set of "items", each comprising a small system of causal relations that has to be explored within 3-5 minutes and afterwards controlled towards given goal states. The main feature of MicroDYN, in our view, is the search for minimal complex systems, that is, systems which contain all (or at least most) of the features of a complex system (complexity, dynamics, polytely, intransparency; see Funke, 1991) while showing lower values on these parameters. The MicroDYN approach uses the formalism of linear structural equations to model systems with continuous variables. Here we argue not only for the use of minimal complex systems but also for extending the idea from systems with continuous variables to systems with discrete variables. Funke (2001) showed that both formal systems (linear structural equations and finite state automata) are ideal instruments for problem solving research. Using minimal complex systems from the framework of finite state automata is therefore a natural extension of the MicroDYN approach and will be dubbed here the MicroFIN approach (which is actually the same approach as proposed much earlier by Buchner & Funke, 1993). But before we go into the details of MicroDYN and MicroFIN, we will briefly explain the guiding philosophy behind our approach.
The philosophy behind minimal complex systems

One of the key concepts of our approach is the concept of "minimal complex systems" (MCS). The starting point of this concept is the idea that complex systems are needed in problem solving research because their features differ markedly from those of simple systems (in terms of complexity, connectivity, dynamics, intransparency, and polytely) and are not simply an addition of simple processes (Funke, 2010). In the initial research phase, it was thought that complex systems should realize as many of these features as possible. So, for example, the famous microworld "Lohhausen" (Dörner, 1980) contained nearly 2,000 variables; other, less famous ones are said to incorporate as many as 25,000 variables. But what happened to these systems? Despite their potential to realize truly complex systems, they produced a lot of problems for the researchers themselves (Funke & Frensch, 2007): how to evaluate participants' actions and decisions (what is the optimal intervention?); how to separate the effects of individual actions from those inherent in the dynamics of the microworld (a single intervention in a system with 50,000 connected variables can have 50,000 direct consequences and many more indirect effects in subsequent cycles); how to construct a test with more than one independent item; how to produce multiple items without spending too much time on assessment. The conception of MCS addresses these unsolved research questions with a simple strategy: instead of realizing ever more complex systems (reaching for the top of the complexity scale), MCS looks to the bottom and asks for the minimum of the complexity scale. Complexity is a very unclear term: we do not know what the most complex system on earth (the human brain?) or even in the universe might be. So, the upper limit of complexity is still open. But, for good reason, the lower limit of complexity must lie somewhere between nothing and a little bit of complexity.
Instead of searching for the upper bounds of complexity, we concentrate on the lower bound. This shift in focus (the MCS perspective) has some advantages for test developers that can be characterized by the following points: (1) the time spent on a single scenario is measured not in hours but in minutes; (2) due to the short time for item application, a series of items can be presented instead of one-item testing; (3) because of our use of formalisms, arbitrary semantic embeddings can be chosen; and (4) a broad range of difficulty levels can be addressed.
Basic Elements within MicroDYN

In MicroDYN, the basic elements are the number of exogenous and endogenous variables and the type of relation between them. This relation can be qualified by its strength (weight of the relation) and direction (positive or negative). Also, the variables can be labelled either abstractly, as "A", "B", and "C", or with semantically meaningful labels like "water", "wind", and "energy". Figure 1 illustrates a simple system with two exogenous and two endogenous variables.

- Insert Figure 1 about here -

Formally, the system shown in Figure 1 can be described by two linear equations:

Yt+1 = 2 · At   (1)
Zt+1 = 3 · At - 2 · Bt   (2)

The subscript of each variable indicates the point in time (t and t+1, respectively), the system itself being event-driven in discrete steps. From Equation (1) it follows that the value of Y at time t+1 equals the value of variable A at time t, multiplied by 2. Equation (2) follows the same logic of computation. The graphical depiction in Figure 1 and the two Equations (1) and (2) are equivalent in terms of the underlying causal structure, but the diagram is more convenient for a human subject. Exploration in the MicroDYN environment requires a careful analysis of intervention effects: an increase of an exogenous variable leads either to an increase in one or more endogenous variables, to a decrease, to a mixed effect (increase and decrease at different variables), or to no effect at all. By carefully designing the sequence of interventions, a meaningful pattern of outputs can be generated and hypotheses about causal effects can be formulated (see Sloman, 2005). To be precise: not only does the existence of a causal effect have to be established, but at the same time a decision about the direction of this causality (positive or negative) has to be made and, if possible, quantitatively expressed.
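The discrete-time dynamics of Equations (1) and (2) can be sketched in a few lines of code. This is a minimal illustration only; the function and variable names (step, a, b) are our own and not part of any published MicroDYN tool.

```python
def step(a, b):
    """Advance the Figure 1 system one discrete time step (t -> t+1).

    a, b: values of the exogenous variables A and B at time t.
    Returns the values of the endogenous variables Y and Z at time t+1.
    """
    y_next = 2 * a            # Equation (1): Y(t+1) = 2 * A(t)
    z_next = 3 * a - 2 * b    # Equation (2): Z(t+1) = 3 * A(t) - 2 * B(t)
    return y_next, z_next

# Example exploration: intervene on one exogenous variable at a time.
print(step(1, 0))  # -> (2, 3): A has a positive effect on both Y and Z
print(step(0, 1))  # -> (0, -2): B has a negative effect on Z only
```

Intervening on one exogenous variable at a time, as in the two calls above, is exactly the kind of systematic exploration strategy the paradigm is designed to elicit.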
So, actually, the task for the causal explorer is threefold: decisions about the existence, the direction, and the strength of a causal relationship have to be made. In a 2 (exogenous) × 3 (endogenous) system, there are 2 × 3 decisions about effects from the exogenous to the endogenous variables, 3 × 2 decisions about effects among the endogenous variables, and 3 decisions about eigendynamics of the endogenous variables.
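The size of this decision space follows directly from the counts above. The following sketch (the function name is ours) computes it for arbitrary numbers of exogenous and endogenous variables, under the assumption that each directed link among endogenous variables and each eigendynamic counts as one decision:

```python
def num_causal_decisions(n_exo, n_endo):
    """Number of existence decisions a causal explorer must make."""
    exo_to_endo = n_exo * n_endo          # links from exogenous to endogenous
    endo_to_endo = n_endo * (n_endo - 1)  # directed links among endogenous variables
    eigendynamics = n_endo                # self-effects of endogenous variables
    return exo_to_endo + endo_to_endo + eigendynamics

print(num_causal_decisions(2, 3))  # -> 6 + 6 + 3 = 15 decisions
```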
Basic Elements within MicroFIN

In MicroFIN, the basic elements are input signals, output signals, states, and the state transitions between them. In contrast to MicroDYN, numbers and quantitative values do not play a role in MicroFIN. Figure 2 illustrates a simple system (a finite state automaton) with two input signals and three states (Thimbleby, 2007).

- Insert Figure 2 and Table 1 about here -

Formally, the diagram in Figure 2 is equivalent to the state transition matrix in Table 1. Once again, the graphical representation seems more convenient for human subjects. Exploration in a MicroFIN environment requires a step-by-step analysis of the state transitions. In principle, at least as many exploration steps are required as there are different state transitions; in practice, this number can be reduced by detecting hierarchies and analogous functions within the automaton.
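A state transition matrix of the kind shown in Table 1 maps naturally onto a lookup table in code. The following sketch uses a hypothetical transition table (the concrete entries of Table 1 are not reproduced here) for an automaton with two input signals and three states, to illustrate the mechanics of step-by-step exploration:

```python
# Hypothetical transition table: (current_state, input_signal) -> next_state.
TRANSITIONS = {
    ("S1", "x"): "S2",
    ("S1", "y"): "S1",
    ("S2", "x"): "S3",
    ("S2", "y"): "S1",
    ("S3", "x"): "S3",
    ("S3", "y"): "S2",
}

def run(start, inputs):
    """Apply a sequence of input signals and return the visited states."""
    state, visited = start, [start]
    for signal in inputs:
        state = TRANSITIONS[(state, signal)]
        visited.append(state)
    return visited

# One exploration episode: press x, x, then y, and record the state sequence.
print(run("S1", ["x", "x", "y"]))  # -> ['S1', 'S2', 'S3', 'S2']
```

With six (state, signal) pairs in the table, an exhaustive exploration of this automaton needs at least six steps, which matches the point made above about the number of required exploration steps.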
Common Elements

Besides their specific attributes, MicroDYN and MicroFIN have some common elements with respect to the task for the subject and to the potential diagnostic procedures. Concerning the task for the subjects, both paradigms require the identification of unknown dynamic devices and the control of these devices. Concerning the potential diagnostic procedures, strategies for knowledge acquisition, the amount and quality of acquired knowledge, and the degree of goal-directed control of the devices can be differentiated.
References

Buchner, A., & Funke, J. (1993). Finite-state automata: Dynamic task environments in problem-solving research. Quarterly Journal of Experimental Psychology, 46(1), 83-118. doi:10.1080/14640749308401068
Danner, D., Hagemann, D., Schankin, A., Hager, M., & Funke, J. (2011). Beyond IQ: A latent state-trait analysis of general intelligence, dynamic decision making, and implicit learning. Intelligence, 39, 323-334.
Dörner, D. (1980). On the difficulty people have in dealing with complexity. Simulation & Games, 11, 87-106.
Fischer, A., Greiff, S., & Funke, J. (2012). The process of solving complex problems. Journal of Problem Solving, 4(1), 19-42.
Funke, J. (2001). Dynamic systems as tools for analysing human judgement. Thinking and Reasoning, 7, 69-89.
Funke, J. (2012). Complex problem solving. In N. M. Seel (Ed.), Encyclopedia of the sciences of learning (pp. 682-685). Heidelberg: Springer.
Funke, J., & Frensch, P. A. (2007). Complex problem solving: The European perspective - 10 years after. In D. H. Jonassen (Ed.), Learning to solve complex scientific problems (pp. 25-47). New York: Lawrence Erlbaum.
Funke, J., & Greiff, S. (2017). Dynamic problem solving: Multipleitem testing based on minimally complex systems. In D. Leutner, J. Fleischer, J. Grünkorn, & E. Klieme (Eds.), Competence assessment in education. Research, models and instruments (pp. 427–443). Heidelberg: Springer.
Greiff, S., & Fischer, A. (2013). Measuring complex problem solving: An educational application of psychological theories. Journal for Educational Research Online, 5(1), 38–58.
Greiff, S., & Funke, J. (2008). What makes a problem complex? Factors determining difficulty in dynamic situations and implications for diagnosing complex problem solving competence. In J. Zumbach, N. Schwartz, T. Seufert, & L. Kester (Eds.), Beyond knowledge: The legacy of competence. Meaningful computer-based learning environments (pp. 199-200). Heidelberg: Springer.
Greiff, S., & Funke, J. (2010). Systematische Erforschung komplexer Problemlösefähigkeit anhand minimal komplexer Systeme. Zeitschrift für Pädagogische Psychologie, 56 (Beiheft), 216-227. Download: http://www.pedocs.de/volltexte/2010/3430/pdf/Greiff_Funke_Projekt_Dynamisches_Problemloesen_D_A.pdf
Greiff, S., Wüstenberg, S., & Funke, J. (2012). Dynamic problem solving: A new measurement perspective. Applied Psychological Measurement, 36(3), 189-213.
Greiff, S., Wüstenberg, S., Holt, D. V., Goldhammer, F., & Funke, J. (2013). Computer-based assessment of complex problem solving: Concept, implementation, and application. Educational Technology Research and Development, 61, 407-421. doi:10.1007/s11423-013-9301-x
Greiff, S., Wüstenberg, S., Molnár, G., Fischer, A., Funke, J., & Csapó, B. (2013). Complex problem solving in educational settings - something beyond g: Concept, assessment, measurement invariance, and construct validity. Journal of Educational Psychology, 105(2), 364-379. doi:10.1037/a0031856
Leutner, D., Fleischer, J., Wirth, J., Greiff, S., & Funke, J. (2012). Analytisches und dynamisches Problemlösen im Lichte internationaler Schulleistungsvergleichsstudien: Untersuchungen zur Dimensionalität. Psychologische Rundschau, 63, 34-42.
Schmid, U., Ragni, M., Gonzalez, C., & Funke, J. (2011). The challenge of complexity for cognitive systems. Cognitive Systems Research, 12, 211-218.
Thimbleby, H. (2007). Press on: Principles of interaction programming. Cambridge, MA: MIT Press.
Wüstenberg, S., Greiff, S., & Funke, J. (2012). Complex problem solving: More than reasoning? Intelligence, 40, 1-14.



