What indicators and other management instruments might have to do with knowledge (maturing)

While MATURE has always been inspired by bottom-up developments (and the concept of knowledge maturing has this as an inherent assumption), we have always emphasized the importance of top-down activities as well. We have avoided the term “management” here and used the term guidance instead. So far, we have mainly concentrated on value-based and valuation-based guidance (showing by appreciation what is considered good practice, which is usually subsumed in a notion of team/corporate culture), and structural guidance (i.e., the establishment and nurturing of communication structures). Furthermore, we have been struggling with the potentials and dangers of incentive structures, mainly monetary/material and career incentives.

This week we were at the Professional Training Facts 2009 in Stuttgart (see here for a summary of MATURE activities at this event). This was a great opportunity to think and discuss topics around competence development in companies. One trend I spotted was the increasing importance of indicators for competence development and the incorporation of those indicators into management instruments like management-by-objectives. At first sight, this always seems like a good idea to “professionalize” the learning element in a company by making it measurable. It originates in the assumption that “you cannot manage what you cannot measure”, which is probably true when you want to manage things. The approach promises transparency and can be a step towards calculating an ROI for learning. The big problem, however, does not lie in the approach of defining indicators and measuring them, but rather in the concrete indicators themselves. These indicators do not naturally derive from the topic at hand, but are actually always bound to a notion of an ideal state; they contain the statement: you should have a high score on this indicator – this would be the best option. This is not bad as such, but this fact is rarely reflected upon, especially because this ideal state is actually context-dependent. It can be different for large vs. small organizations, for innovation-focused vs. efficiency-focused organizations, for service vs. production, etc. What happens is that somebody defines (probably with good reasons) a certain set of indicators, and others simply take those indicators and apply them without questioning their value for their own context.

What does this have to do with knowledge and knowledge maturing? There are three aspects:

  • This “ideal state” conception is itself a body of knowledge, and it has to be carefully examined whether the underlying knowledge about the ideal state has reached a level of maturity that allows for a standardization in which you usually simply take things and apply them (like, e.g., for many financial indicators), or whether we are at a lower level of maturity and have to develop our own answer to the question “what is the ideal state” from there. In this learning and maturing process we have to learn about the contextual factors that differentiate us from others. And even if we take and apply standardized things, we should allow and encourage questioning their usefulness at any time.
  • Indicators are not only about measuring, they are about management and guidance. They aim at changing behaviour by explicitly or implicitly encouraging people to become “better”. Even if we know sufficiently about the ideal state, do we know enough about how a certain set of indicators (potentially tied together with a complex formula) influences behaviour? Is our knowledge about that mature enough to make them the basis for formalized instruments (like reward schemes, but also career decisions)? Can we differentiate between correlations and causal relationships? Can we separate external factors? Or should we be modest enough to consider them what they are: indicators that measure something, but not the full richness of reality, and use them as a reflection instrument – and in the end maybe come to the conclusion that they measure nothing of interest.
  • We are currently researching indicators for knowledge maturing, both in the empirical and the technical-conceptual strand of the MATURE project. We should be aware that indicators always derive from a concept of an ideal state, which is difficult to envision as a whole. So we will base those indicators on our pre-conceptions (which have a lot to do with our value systems) – and we should carefully reflect on this problem.

As a conclusion: measuring can be very helpful for many aspects, also on the soft side, but we should understand the development and application of such measuring instruments as a collaborative learning process which should involve many. Then this process and its result can also be a good guidance instrument.
