Parallel advances in mathematics and communications technology in the late twentieth century, in the related fields of information theory and computer science, came together to inform the complexity sciences. Together, these theoretical developments helped researchers better understand the concept of "entropy" from the second law of thermodynamics.

Information theory involves the formal study of uncertainty versus predictability and how these ideas interrelate as events unfold. Also important is the question of how "information" can be communicated from one place to another or from one person to another. Through this formalization, clearer definitions of the important notion of informational "entropy" can be formulated (Cover & Thomas, 2004).
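The formal notion of informational entropy can be made concrete with Shannon's formula, which measures the uncertainty of a probability distribution in bits. The following is a minimal illustrative sketch (the function name and coin example are ours, not from the text):

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum(p * log2(p)), in bits; zero-probability terms contribute 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally unpredictable: 1 bit of entropy per toss.
print(shannon_entropy([0.5, 0.5]))   # 1.0
# A biased coin is more predictable, so each toss carries less information.
print(shannon_entropy([0.9, 0.1]))   # about 0.47
```

The more skewed the distribution, the lower the entropy and the more predictable the unfolding events.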

In parallel, researchers such as Kolmogorov, Chaitin, and Crutchfield in the related field of computing theory developed approaches to complexity that define "algorithmic complexity" as the size (defined in various ways) of a computer algorithm that could run on an idealized machine and then stop, predicting the outcome of events in the system. The more complex the system, the larger the algorithm would need to be to predict its outcomes.
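Algorithmic (Kolmogorov) complexity is not computable in general, but a common and practical stand-in is the length of a losslessly compressed description: regular data compresses to a short "program," while structureless data does not. A minimal sketch of that intuition (the examples and sizes here are illustrative assumptions, not from the text):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    # The compressed length is a computable upper bound on, and rough
    # proxy for, the algorithmic complexity of the data.
    return len(zlib.compress(data, 9))

regular = b"ab" * 500  # a tiny program ("repeat 'ab' 500 times") reproduces this
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))  # nearly incompressible

print(compressed_size(regular))  # far smaller than the original 1000 bytes
print(compressed_size(noisy))    # close to (or above) the original 1000 bytes
```

The gap between the two compressed sizes mirrors the claim in the text: the more complex the data, the larger the description needed to reproduce it.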

These ideas were further explored (Feldman & Crutchfield, 1998) to define an "effective complexity" that predicts events only at a chosen level of coarse-grained scale, the "forest". In this way of thinking, finer-grained events, the details in the "trees", can be ignored (Prokopenko, Boschetti & Ryan, 2009).

Thus, the notion of "effective complexity" or "statistical complexity" at a particular level of coarse-grained scale enables complexity thinking to be applied to emergent phenomena if observed properties of those phenomena can be predicted with finite algorithms or, more generally, with *models* (Hazy & Ashley, 2011). There is a nuance here, however, since models of effective complexity always involve the choice of a context, a level of scale, by the observer/modeler. Thus, there is always a subjective choice about what is important and relevant and what can be ignored. Interestingly, this also implies a need to consider the significance of the question: to whom is the phenomenon relevant, and why? In social systems, this issue can be a deep one.
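The observer's choice of scale can be sketched concretely: a coarse-graining rule maps many fine-grained details onto one "forest"-level symbol, and a simple, predictable pattern can emerge from noisy fine-grained data. The signal, the 10% noise level, and the block size below are illustrative assumptions, not taken from the text:

```python
import random

random.seed(1)
# Fine grain ("the trees"): a slow square wave, 100 blocks of 20 samples,
# with each individual sample flipped by 10% measurement noise.
fine = []
for block in range(100):
    level = block % 2
    fine += [level ^ (random.random() < 0.1) for _ in range(20)]

# Coarse grain ("the forest"): the observer chooses a scale and takes a
# majority vote within each block of 20 samples.
coarse = [int(sum(fine[i:i + 20]) > 10) for i in range(0, len(fine), 20)]

print(coarse[:8])  # recovers the simple alternating pattern 0, 1, 0, 1, ...
```

At the fine scale the sequence is noisy and hard to predict sample by sample; at the coarse scale a very small model ("alternate 0 and 1") suffices. Which scale is the "right" one is exactly the subjective modeling choice the text describes.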

Cover, T., & Thomas, J. A. (2004). *Elements of Information Theory* (2nd ed.). ISBN 9780471241959.

Crutchfield, J. P. (1994). Is anything ever new? Considering emergence. In G. Cowan, D. Pines, & D. Meltzer (Eds.), *Complexity: Metaphors, Models, and Reality* (pp. 515-537). ISBN 0201626063.

Crutchfield, J. P., & Feldman, D. P. (1997). Statistical complexity of simple one-dimensional spin systems. *Physical Review E*, ISSN 1539-3755, 55(2), 1239-1242.

Crutchfield, J. P., & Feldman, D. P. (2003). Regularities unseen, randomness observed: The entropy convergence hierarchy. *Chaos*, ISSN 1054-1500, 13(1), 25-54.

Epstein, J. M. (1997). *Nonlinear Dynamics, Mathematical Biology, and Social Science* (Vol. IV). ISBN 9780201419887.

Feldman, D. P., & Crutchfield, J. P. (1998). Measures of statistical complexity: Why? *Physics Letters A*, ISSN 0375-9601, 238(4-5), 244-252.

Hazy, J. K., & Ashley, A. S. (2011). Unfolding the future: Bifurcation in organizing form and emergence in social systems. *Emergence: Complexity and Organization*, 13(3).

Prokopenko, M., Boschetti, F., & Ryan, A. J. (2009). An information-theoretic primer on complexity, self-organization and emergence. *Complexity*, ISSN 1099-0526, 15(1), 11-28.
