FENS’25
XIII Polish Symposium on Physics in Economics and Social Sciences
22-23 September 2025
Warsaw University of Technology
& University of Warsaw

About the Event
The XIII Polish Symposium on Physics in Economics and Social Sciences (FENS’25) is a well-established event that brings together physicists, economists, and social scientists interested in applying statistical physics methods to challenges encountered in their respective fields.
The 13th FENS Symposium will be held in Warsaw on 22-23 September 2025. It is jointly organized by the Faculty of Physics of the Warsaw University of Technology and the University of Warsaw, and is endorsed by the Polish Physical Society.
The event aims to foster interdisciplinary collaborations and facilitate the exchange of ideas among researchers from different fields.
Important dates
- 1 August 2025 (extended from 15 July) – registration and contribution submission
- 15 August 2025 (extended from 23 July) – submission acceptance
- 26 August 2025 (extended from 6 August) – final date of reduced fee payment
- 26 August 2025 – final date of fee payment
Scope
The scope of topics in socio-economic complex systems encompasses a wide range of subjects, including, but not limited to:
- structure and evolution of complex networks in socio-economic context
- role of complex adaptive systems in socio-economic phenomena and processes
- dynamics of conflicts and social polarisation
- behavioral modelling and irrational choice, the role of random chance and information in economic and social phenomena
- modeling of opinion evolution and the spread of innovation
- complexity and emergence in socio-economic systems
- computational methods in economics and social sciences
- financial time series: equilibrium and non-equilibrium properties, linear and non-linear, fractal and multifractal, memory effects, correlations and dependencies, non-stationarity
- random matrix theory
- game theory
- algorithmic value investing on the stock market
- thermodynamic formalism in economics
- role of non-extensiveness in financial markets
- role of extreme and superextreme events in financial markets
- risk management and propagation of risk vs. share of wallet; financial engineering
- models of market dynamics, especially agent-based models in micro- and macroscale
- artificial intelligence forecasting

About the Venue
The conference will take place at the Auditorium of the Faculty of Physics, Warsaw University of Technology. Located in the heart of Warsaw, the Faculty of Physics offers a modern and welcoming academic environment within one of Poland’s most prestigious technical universities. The auditorium provides excellent facilities for lectures, presentations, and discussions, ensuring a comfortable and inspiring setting for all participants. The venue is easily accessible by public transport, with several tram and bus lines connecting it to other parts of Warsaw, including the main train station and major hotels.
Address:
Faculty of Physics
Warsaw University of Technology
ul. Koszykowa 75
00-662 Warsaw, Poland
Invited Speakers
We are honoured to welcome a distinguished group of invited speakers whose expertise and contributions have significantly advanced their respective fields. Their lectures will provide valuable insights and serve as a cornerstone of the conference programme, fostering interdisciplinary dialogue and inspiring further research.

Marián Boguñá Espinal
A full professor and ICREA Academia researcher at the Departament de Física de la Matèria Condensada, Universitat de Barcelona. His main research interests currently focus on complex systems and complex networks, two exciting and multidisciplinary fields of research that apply statistical physics techniques to the understanding of the many networked systems around us.

Jarosław Kwapień
Professor at the Institute of Nuclear Physics, Polish Academy of Sciences. His research interests involve complex systems, econophysics, quantitative linguistics, network science, and other interdisciplinary topics.

Piotr Fronczak
A professor at the Faculty of Physics, Warsaw University of Technology, whose research focuses on theoretical and mathematical physics applied to complex and nonlinear phenomena, including complex networks and multi-agent systems.
Programme
Below is the detailed programme for each day of the Symposium.
Day 1
8:30
Registration
9:45
Symposium Opening
10:00
Marián Boguñá Espinal
Embedding Complex Systems in Hyperbolic Space: A Geometric Lens on Networks
Network geometry posits that the architecture of real-world complex networks is governed by hidden hyperbolic metric spaces: nodes occupy positions in an underlying negatively curved geometry, and observable connections arise primarily from their hyperbolic distances. This perspective unifies—and quantitatively explains—hallmark properties of complex systems, including heavy-tailed degree distributions, small-world behavior, strong clustering, self-similarity, community structure, and efficient navigability, while also furnishing a renormalization-group formalism for networks. In this talk, I will give a brief introduction to this exciting topic and highlight some of the most interesting applications in the economic and social sciences. Our results illustrate how negative curvature offers not only a unifying theory of network architecture but also practical tools for inference, control, and design in large-scale complex systems.
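For readers who want to experiment with the geometric picture sketched above, the following minimal Python snippet (illustrative parameters, not the speaker's code) samples node coordinates in a hyperbolic disk and wires links with the Fermi-Dirac connection probability commonly used in hyperbolic random graph models.

    import numpy as np

    rng = np.random.default_rng(42)
    N, R, T = 500, 12.0, 0.5                      # illustrative size, disk radius, temperature

    theta = rng.uniform(0.0, 2 * np.pi, N)        # angular (similarity) coordinate
    u = rng.uniform(size=N)
    r = np.arccosh(1.0 + u * (np.cosh(R) - 1.0))  # radial (popularity) coordinate

    def hyperbolic_distance(i, j):
        """Distance between nodes i and j in the hyperbolic plane (K = -1)."""
        dtheta = np.pi - abs(np.pi - abs(theta[i] - theta[j]))
        arg = (np.cosh(r[i]) * np.cosh(r[j])
               - np.sinh(r[i]) * np.sinh(r[j]) * np.cos(dtheta))
        return np.arccosh(max(arg, 1.0))          # guard against rounding below 1

    edges = []
    for i in range(N):
        for j in range(i + 1, N):
            d = hyperbolic_distance(i, j)
            p = 1.0 / (1.0 + np.exp((d - R) / (2.0 * T)))  # Fermi-Dirac kernel
            if rng.uniform() < p:
                edges.append((i, j))

    deg = np.bincount(np.array(edges).ravel(), minlength=N)
    print(f"{len(edges)} edges, max degree {deg.max()}, mean degree {deg.mean():.2f}")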
10:45
Piotr Fronczak
Fractal Complex Networks: Theory and Modeling
Fractal complex networks represent a distinct structural class, characterized by self-similarity, modularity, and long-range correlations that differentiate them from purely random or small-world architectures. In this presentation, I will introduce the defining features of fractal networks and explain how their geometry can be systematically explored through network renormalization. I will discuss state-of-the-art techniques for calculating their fractal dimension, including box-covering approaches and scaling relations, and show how these methods reveal invariant properties under coarse-graining. The talk will also address the theoretical framework of scaling in networks, outlining how exponents governing mass, diameter, and degree distributions are interrelated. Finally, I will present generative models of fractal networks that capture the structural features of real-world systems, including the Internet, the World Wide Web, and co-authorship networks.
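As a toy illustration of the box-covering idea discussed in this talk, the sketch below (assuming a simple random-center ball-covering variant and a 2D lattice as the test graph; the speaker's algorithms are more refined) estimates a box-counting dimension from the scaling of the number of covering balls with their radius.

    import random
    import networkx as nx
    import numpy as np

    def ball_covering(G, r_B, seed=0):
        """Cover G greedily with balls of radius r_B; return the number of balls."""
        rng = random.Random(seed)
        uncovered = set(G.nodes)
        n_boxes = 0
        while uncovered:
            center = rng.choice(sorted(uncovered))          # random uncovered center
            ball = nx.single_source_shortest_path_length(G, center, cutoff=r_B)
            uncovered -= set(ball)
            n_boxes += 1
        return n_boxes

    G = nx.grid_2d_graph(30, 30)       # 2D lattice: d_B should come out close to 2
    radii = [1, 2, 3, 4, 5, 6]
    counts = [ball_covering(G, r) for r in radii]
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    print(f"estimated box-counting dimension d_B ~ {-slope:.2f}")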
11:30
Coffee Break ☕
12:00
Mateusz Wilinski
Interacting Agents for Limit Order Book Modelling
Zero-intelligence models of limit order books proved to be a useful tool in deciphering many statistical properties observed in financial data [1]. Although there have been many extensions, including ϵ-intelligence models and beyond [2], it comes as a surprise that recent models still rely only on market-mediated interactions between agents [3]. Moreover, events such as the GameStop short squeeze, driven by social media interaction [4], highlight the need to understand how external interactions between market participants can affect order flow and price formation.
To close the described gap, we propose an extension of the standard zero-intelligence agent-based model in which an external interaction mechanism is introduced. As we show in our work, this simple addition can significantly change the properties of the observed price autocorrelation. More importantly, the qualitative results are strongly connected to the graph structure, with networks suited to describing social interactions producing longer memory and wider price distributions.
[1] MG Daniels, JD Farmer, L Gillemot, G Iori, and E Smith. Quantitative model of price diffusion and market friction based on trading format as a mechanistic random process. Physical review letters, 90(10):108102, 2003.
[2] K Jain, N Firoozye, J Kochems, and P Treleaven. Limit order book simulations: A review. arXiv preprint arXiv:2402.17359, 2024.
[3] J Staccioli and M Napoletano. An agent-based model of intra-day financial markets dynamics. Journal of Economic Behavior & Organization, 182:331–348, 2021.
[4] Z Umar, M Gubareva, I Yousaf, and S Ali. A tale of company fundamentals vs sentiment driven pricing: The case of GameStop. Journal of Behavioral and Experimental Finance, 30:100501, 2021.
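A heavily simplified zero-intelligence order-book sketch in the spirit of [1] is given below; all rates and the price grid are illustrative, and the external interaction network that is the subject of this talk is not included.

    import numpy as np

    rng = np.random.default_rng(7)
    bids, asks = {99: 5}, {101: 5}      # price level -> resting volume (illustrative)
    mids = []

    for t in range(20_000):
        if not bids: bids[min(asks) - 1] = 1   # toy safeguard: keep the book two-sided
        if not asks: asks[max(bids) + 1] = 1
        bb, ba = max(bids), min(asks)
        event = rng.choice(["lb", "ls", "mb", "ms", "cb", "cs"],
                           p=[0.3, 0.3, 0.1, 0.1, 0.1, 0.1])
        if event == "lb":                      # limit buy, placed below the best ask
            p = ba - int(rng.integers(1, 11))
            bids[p] = bids.get(p, 0) + 1
        elif event == "ls":                    # limit sell, placed above the best bid
            p = bb + int(rng.integers(1, 11))
            asks[p] = asks.get(p, 0) + 1
        elif event == "mb":                    # market buy consumes the best ask
            asks[ba] -= 1
            if asks[ba] == 0: del asks[ba]
        elif event == "ms":                    # market sell consumes the best bid
            bids[bb] -= 1
            if bids[bb] == 0: del bids[bb]
        elif event == "cb" and len(bids) > 1:  # random cancellation, bid side
            p = int(rng.choice(list(bids))); bids[p] -= 1
            if bids[p] == 0: del bids[p]
        elif event == "cs" and len(asks) > 1:  # random cancellation, ask side
            p = int(rng.choice(list(asks))); asks[p] -= 1
            if asks[p] == 0: del asks[p]
        if bids and asks:
            mids.append((max(bids) + min(asks)) / 2)

    r = np.diff(mids)
    print("lag-1 return autocorrelation:", round(float(np.corrcoef(r[:-1], r[1:])[0, 1]), 3))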
12:20
Anton Josef Heckens
Ultrafast Extreme Events: Mechanisms and Recovery in a Historical Perspective
To understand the emergence of Ultrafast Extreme Events (UEEs), the influence of algorithmic trading or high-frequency traders is of major interest, as they make it extremely difficult to intervene and to stabilize financial markets [1, 2]. Here, we compare various characteristics of UEEs over different years for the US stock market to assess the possible non-stationarity of the effects. We show that liquidity plays a dominant role in the emergence of UEEs and find a general pattern in their dynamics. We also investigate the after-effects in view of the recovery rate. We find common patterns for different years. We explain changes in the recovery rate by varying market sentiments for the different years.
[1] N. Johnson, G. Zhao, E. Hunsader, H. Qi, N. Johnson, J. Meng, B. Tivnan, Abrupt rise of new machine ecology beyond human response time, Scientific Reports 3 (1) (2013) 2627.
[2] T. Braun, J. A. Fiegen, D. C. Wagner, S. M. Krause, T. Guhr, Impact and recovery process of mini flash crashes: An empirical study, PLoS ONE 13 (5) (2018) e0196920.
12:40
Benjamin Köhler
A New Traders’ Game? – Response Functions in a Historical Perspective
Traders on financial markets generate non-Markovian effects in various ways, particularly through their competition with one another, which can be interpreted as a game between different (types of) traders. To quantify the market mechanisms, we analyze self-response functions for pairs of different stocks and the corresponding trade-sign correlators. While the non-Markovian dynamics in the self-responses is liquidity-driven, it is expectation-driven in the cross-responses, which is related to the emergence of correlations. We study the non-stationarity of these responses over time. In earlier publications, we only investigated the crisis year 2008. We now considerably extend this by also analyzing the years 2007, 2014 and 2021. To improve statistics, we also work out averaged self- and cross-response functions, which represent the responses of the market to itself for a fixed time lag. By comparing these functions across different years, we find significant variations over time, revealing changes in the traders’ game.
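As background, a response function of the kind analyzed in this talk can be estimated as R(τ) = ⟨ε_t (m_{t+τ} − m_t)⟩, where ε_t is the trade sign and m_t the midprice. The sketch below computes this on synthetic data (the toy impact kernel is an assumption, not the speaker's empirical pipeline); a cross-response is obtained analogously by pairing the signs of one stock with the price changes of another.

    import numpy as np

    rng = np.random.default_rng(1)
    T = 50_000
    eps = np.sign(rng.standard_normal(T))          # +1 buy / -1 sell trade signs
    eps[eps == 0] = 1
    # toy midprice: persistent impact of past signs plus diffusive noise
    kernel = 0.05 / np.sqrt(np.arange(1, 50))
    m = np.cumsum(0.1 * rng.standard_normal(T)) + np.convolve(eps, kernel)[:T]

    def response(eps, m, tau):
        """Self-response R(tau): mean signed midprice change a lag tau after a trade."""
        return float(np.mean(eps[:-tau] * (m[tau:] - m[:-tau])))

    for tau in (1, 2, 4, 8, 16, 32):
        print(f"R({tau}) = {response(eps, m, tau):.4f}")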
13:00
Lunch 🍽️
14:00
Angelika Abramiuk-Szurlej
Rigorous Agent-Based Modeling of Green Practice Diffusion: Analytical Approximations and Validation on Organizational Networks
Agent-based modeling (ABM) is increasingly used to manage pro-environmental behavior change, especially in energy-related contexts. A key advantage of ABM is its ability to model local consumer interactions, which play a crucial role in promoting pro-environmental behavior driven by peer pressure and social norms. However, ABM is often criticized for its lack of rigorous validation and sensitivity analysis. To address these challenges, we refine an existing ABM of green product and practice diffusion, applying Pair Approximation (PA) and Monte Carlo Simulations (MCS) to real-world organizational networks. This approach provides new insights into how well analytical methods can capture diffusion dynamics in social systems.
The model considers two main factors: (1) social interactions among agents, crucial for the spread of energy-related behaviors, and (2) the probability of engagement in a certain behavior. The original model assumed engagement following a logistic function. We propose a modified version where engagement probability is treated as an independent parameter not defined by any specific functional form. The new version can be seen as a general innovation diffusion model that extends beyond pairwise interactions.
We use two analytical methods to analyze the model: Mean-Field Approximation (MFA) and Pair Approximation (PA). These methods are compared with MCS applied to Watts-Strogatz (WS) graphs and organizational networks. The WS graph serves as a controlled environment, allowing us to verify the model implementation and examine how well PA captures diffusion dynamics across different graph parameters. We also validate the model on real organizational networks to examine actual diffusion patterns. By comparing PA and MCS results, we assess the accuracy of analytical methods in predicting adoption dynamics.
The results on the WS graph show that PA provides accurate results when the clustering coefficient is low, but overestimates adoption levels in highly clustered networks. In these cases, we cannot replace the ABM with the analytical approximation. Additionally, the time to reach steady-state adoption levels is longer in clustered networks, showing a “critical slowing-down” effect near phase transitions. This insight is crucial for policymakers and businesses aiming to accelerate green practice adoption within organizations. We also examine organizational networks, where properties such as clustering coefficient and degree distribution, along with global network parameters, help to explain the differences between PA predictions and MC results.
Our study highlights the need to evaluate ABM results with analytical methods and MCS, especially when using real-world data. While PA is useful in less clustered networks, MCS is necessary for accurate predictions in highly structured systems. Future research should examine different network structures to refine analytical methods.
14:20
Piotr Górski
Multidimensional Attributes Make Structural Balance Dynamics Measurable
We study a social network where agents correspond to people, and links are relationships between agents. Each agent possesses a set of attributes. Following the homophily principle, we assume that similar people like each other, whereas dissimilar people dislike each other. This allows us to denote relations as positive (friendly) or negative (unfriendly). Distinguishing the signs of relationships between pairs of agents can be performed for each attribute separately or considering all attributes together. In the former case, we assume an edge is positive when the two agents hold the same attribute, and negative otherwise. We denote such edges as simple edges. In the latter case, for a pair of agents i and j, we calculate a normalized distance in the multidimensional space of attributes, x_ij, and employ a threshold parameter – tolerance Θ – defining the link as follows: a link is positive when x_ij ≤ Θ, and negative otherwise. We denote such edges as multi-edges.
Structural balance theory (SBT) analyses the stability of groups in signed social networks following the principles “the friend of my friend or the enemy of my enemy is my friend” and “the friend of my enemy and the enemy of my friend is my enemy”. The theory discusses structures of all sizes, focusing on triads – connected trios of people. SBT states that people in unbalanced triads exhibit tension resulting from cognitive dissonance. The willingness to change such a state drives changes of relations with people within a triad. That is why, according to SBT, unbalanced triads are less stable than balanced ones.
We apply our signed network construction definition to study the NetSense dataset, which contains data about relationships between university students and students’ opinions on important social topics. For this network, we construct simple and multidimensional triads and test for which conditions SBT principles can be measured in the system. To this aim, we use static and dynamical structural balance metrics, such as the density of balanced triads and triad transition probabilities, respectively. We compare the measures obtained for the real network with those for three different null models and two randomized processes.
Our results show that SBT influence is not observed in the case of simple edges for the analyzed dataset. Triad densities for real networks are not statistically different from densities in null models. However, in the case of multi-edges, for the range of tolerance values, multidimensional triads are significantly more balanced in the real network. This means that structural balance dynamics are measurable only when considering multidimensional attributes. We also propose an agent-based model with triad dynamics causing coevolution of attributes and edge signs. This model reproduces transition probabilities better than randomized processes for a similar range of tolerance values. Summing up, multidimensional attributes are sufficient to measure SBT influence.
[1] Linczuk, Joanna, Piotr J. Górski, Boleslaw K. Szymanski, and Janusz A. Holyst. “Multidimensional Attributes Expose Heider Balance Dynamics to Measurements.” Scientific Reports 13, no. 1 (2023): 1–14. https://doi.org/10.1038/s41598-023-42390-w.
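The edge-sign construction described in the abstract can be condensed into a few lines; in the sketch below the attribute vectors, the normalization of x_ij and the tolerance value are illustrative stand-ins, not the NetSense data.

    import numpy as np

    Theta = 0.4                                 # tolerance threshold (illustrative)
    attrs = {                                   # agent -> attribute vector in [0, 1]
        "i": np.array([0.1, 0.9, 0.5]),
        "j": np.array([0.2, 0.8, 0.4]),
        "k": np.array([0.9, 0.1, 0.2]),
    }

    def multi_edge_sign(a, b):
        """Positive multi-edge iff the normalized attribute distance x_ab <= Theta."""
        x_ab = np.abs(attrs[a] - attrs[b]).mean()   # one way to normalize to [0, 1]
        return +1 if x_ab <= Theta else -1

    def simple_edge_sign(a, b, dim):
        """Simple edge: sign from a single (here binarized) attribute dimension."""
        return +1 if round(attrs[a][dim]) == round(attrs[b][dim]) else -1

    print(multi_edge_sign("i", "j"), multi_edge_sign("i", "k"))   # +1 -1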
14:40
Agnieszka Czaplicka
Mutual Benefits of Social Learning and Algorithmic Mediation for Cumulative Culture
The remarkable ecological success of humans is often attributed to our ability to develop complex cultural artefacts that enable us to cope with environmental challenges. The evolution of complex culture (cumulative cultural evolution) is usually modeled as a collective process in which individuals invent new artefacts (innovation) and copy information from others (social learning). This classic picture overlooks the growing role of intelligent algorithms in the digital age (e.g., search engines, recommender systems, large language models) in mediating information between humans, with potential consequences for cumulative cultural evolution.
Building on a previous model, we investigate the combined effects of network-based social learning and a simplistic version of algorithmic mediation on cultural accumulation. We find that algorithmic mediation significantly impacts cultural accumulation and that this impact grows as social networks become less densely connected. Cultural accumulation is most effective when social learning and algorithmic mediation are combined, and the optimal ratio depends on the network’s density.
This work is an initial step towards formalising the impact of intelligent algorithms on cumulative cultural evolution within an established framework. Models like ours provide insights into mechanisms of human-machine interaction in cultural contexts, guiding hypotheses for future experimental testing.
15:00
Mateusz Samsel
Universality Classes in the Time Evolution of Epidemic Outbreaks on Complex Networks
Understanding the spread of epidemics in complex networks is of major importance, with both theoretical and practical implications. While much research has focused on steady-state properties and epidemic thresholds, relatively little attention has been given to the temporal evolution of outbreaks – a critical aspect for timely interventions and public health planning. Network-based studies have revealed that structural features such as degree distributions, node degree correlations, and temporal patterns strongly influence epidemic dynamics. Notably, the discovery that scale-free networks exhibit vanishing epidemic thresholds has profoundly impacted epidemic modeling and response strategies.
Here, we address the underexplored question of how epidemic prevalence evolves over time by analyzing the full temporal dynamics of outbreaks using the fundamental susceptible-infected (SI) model on different classes of complex networks. Through a combination of analytical theory and large-scale simulations, we identify two distinct universality classes of epidemic growth, governed by the topology of the underlying network. In small-world networks, prevalence follows a Gompertz-like curve with an initial exponential phase. In contrast, fractal networks exhibit Avrami-type dynamics, typical of spatially constrained systems, with no exponential regime. These growth patterns are robust across a wide range of transmission rates, for which we derive explicit analytical formulas and class-specific scaling relations.
Our findings offer a unified framework that links network structure to epidemic speed and growth profiles, advancing the theoretical understanding of non-equilibrium spreading processes and providing critical insights for the design of efficient containment strategies.
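A minimal simulation of the setting studied in this talk: SI prevalence on a small-world graph, together with the two candidate growth laws named in the abstract. The talk's analytical formulas and scaling relations are not reproduced, and all parameters are illustrative.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(3)
    G = nx.watts_strogatz_graph(2000, 6, 0.05, seed=3)   # small-world topology
    beta = 0.2                                           # per-contact transmission rate

    infected = {0}
    prevalence = []
    for t in range(60):                                  # discrete-time SI dynamics
        new = {v for u in infected for v in G.neighbors(u)
               if v not in infected and rng.uniform() < beta}
        infected |= new
        prevalence.append(len(infected) / G.number_of_nodes())

    def gompertz(t, a, b, c):       # growth law expected on small-world networks
        return a * np.exp(-b * np.exp(-c * t))

    def avrami(t, k, n):            # growth law expected on fractal networks
        return 1.0 - np.exp(-k * t**n)

    print([round(p, 3) for p in prevalence[::10]])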
15:20
Robert Jankowski
Music Networks Reveal Structured Uncertainty and Efficient Cognitive Inference
Music, as a structured yet perceptually rich experience, can be modeled as a network to uncover how humans encode and process auditory information. While network-based representations of music are increasingly common, the impact of feature selection on structural properties and cognitive alignment remains underexplored. In this study, we evaluated eight network models, each constructed from symbolic representations of piano compositions using distinct combinations of pitch, octave, duration, and interval, designed to be representative of existing approaches in the literature. By comparing these models through topological metrics, entropy analysis, and divergence with respect to inferred cognitive representations, we assessed both their structural and perceptual efficiency. Our findings reveal that simpler, feature-specific models (e.g., pitch or duration alone) better match human perception, whereas complex, multidimensional representations introduce cognitive inefficiencies. These results support the view that humans rely on modular, parallel cognitive networks—an architecture consistent with theories of predictive processing and free energy minimization. Moreover, we find that musical networks are structurally organized to guide attention toward transitions that are both uncertain and inferable. The resulting structure concentrates uncertainty in a few frequently visited nodes, creating local entropy gradients that alternate between stable and unpredictable regions, thereby enabling the expressive dynamics of tension and release that define the musical experience. These findings show that network structures make the organization of uncertainty in music observable, offering new insight into how patterned flows of expectation shape perception and open new directions for studying how musical structures evolve across genres, cultures, and historical periods through the lens of network science.
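One ingredient of such analyses, the local entropy of a transition network, can be sketched as follows; the toy note sequence and the pitch-only representation are illustrative choices, not the study's corpus.

    from collections import Counter, defaultdict
    import math

    notes = ["C", "D", "E", "C", "D", "G", "E", "C", "D", "E", "F", "E", "D", "C"]
    counts = defaultdict(Counter)
    for a, b in zip(notes, notes[1:]):
        counts[a][b] += 1                   # weighted directed transition a -> b

    def node_entropy(node):
        """Shannon entropy (bits) of the node's outgoing transition distribution."""
        total = sum(counts[node].values())
        return -sum((c / total) * math.log2(c / total)
                    for c in counts[node].values())

    for node in sorted(counts):
        print(node, round(node_entropy(node), 3))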
15:40
Michał Łepek
An Algorithm for Finding Fractal Dimensions of Complex Networks Using a Fixed Number of Boxes of Flexible Diameter
Fractal properties of complex networks reveal fundamental aspects of their self-similar structure and scaling behavior, providing insights into the organization of a wide range of natural, social, and technological systems. A common approach to quantifying fractality relies on box-covering algorithms, which partition a network into subgraphs (boxes) and analyze how the number of boxes scales with their size. However, existing algorithms typically impose a fixed box size or require computationally expensive procedures, limiting their applicability to large-scale networks. In this work, we introduce a novel box-covering algorithm that departs from traditional schemes by assigning nodes to boxes centered on their nearest local hubs without enforcing rigid distance constraints. Rather than fixing the box size a priori, we first determine the number of boxes and subsequently compute their average size, a procedure that is consistent with recent scaling theories of fractal networks and with models of hidden metric spaces where nodes are embedded. This methodological shift enables a more natural representation of network organization and significantly reduces computational complexity compared to existing methods. Furthermore, by relaxing distance constraints while still grouping nodes around hubs, our algorithm achieves box partitions with more homogeneous sizes than those produced by classical greedy coloring approaches. We validate our method on nine networks, including three model-based and six real-world examples, covering systems with well-established fractality, networks with previously uncertain but here confirmed fractal properties (such as the Internet at the autonomous system level), and large-scale networks that had remained inaccessible to previous algorithms. Our results demonstrate that the proposed approach is both scalable and accurate, providing a practical and versatile tool for exploring self-similarity and scaling phenomena in complex networks.
16:00
Coffee Break ☕ and Poster Session
Krishnadas Mohandas
Structurally Balanced Growing Network as Randomized Pólya Urn Process
We investigate a process of growth of a signed network that strictly adheres to Heider structural balance rules, resulting in two opposing, growing cliques. New agents make contact with a random existing agent and join one of the cliques with bias p towards the group they made contact with. The evolution of the group sizes can be mapped onto a randomized Pólya urn model. Aside from p=1, the relative sizes of the two cliques always tend towards 1/2, but the behavior differs between the anti-bias regime p<1/2 and the biased regime p>1/2. Below this threshold, the clique sizes converge to the same size regardless of initial differences, while above it the initial difference in clique sizes persists. This difference is obscured by fluctuations, with the clique size distribution remaining unimodal even for p>1/2, up until a characteristic point p_ch, where it becomes bimodal, with the initially larger and smaller cliques featuring their own distinguishable peaks. We discuss several approaches to estimate this characteristic value. At p=1, the relative sizes of the cliques can persist indefinitely, although still subject to fluctuations.
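The growth rule described above is easy to simulate directly; a minimal sketch (with illustrative seed sizes and run lengths) is:

    import numpy as np

    rng = np.random.default_rng(5)

    def grow(p, n_a=3, n_b=1, steps=10_000):
        """Return the final relative size of clique A."""
        a, b = n_a, n_b
        for _ in range(steps):
            contact_in_a = rng.uniform() < a / (a + b)    # uniformly random contact
            join_a = contact_in_a if rng.uniform() < p else not contact_in_a
            if join_a:
                a += 1
            else:
                b += 1
        return a / (a + b)

    for p in (0.3, 0.5, 0.7, 1.0):
        final = [grow(p) for _ in range(20)]
        print(f"p={p}: mean {np.mean(final):.3f}, std {np.std(final):.3f}")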
Jan Ostrowski
Information Entropy in Mutating Viruses
A large amount of data on a single virus type, SARS-CoV-2, gathered during the COVID-19 pandemic has provided unique insights into the stochastic processes connected to mutations at the DNA level and changes in system entropy. Predictions made by biophysical Single Hit Target Models associate DNA damage with an increase in system entropy. However, it turns out that not all mutations exhibit the same nature. Using real-world data provided by the National Center for Biotechnology Information, it is evident that viruses, as complex systems, are capable of decreasing the Shannon entropy of their DNA. The analysis of mutating viruses offers unique insights into evolving and self-adapting systems that are far from thermodynamic equilibrium. Naïve simulation methods, which assume complete randomness in the processes of mutation and selection, are not compatible with the observed phenomena. The existence of bifurcations, local maxima, and visible “flares” calls for the development of new and more complex models. With multiple examples demonstrating this trend (such as COVID-19, HIV, and influenza), and with both classical statistical methods and Bayesian robust regression confirming its existence, this could represent a significant step toward developing methods for predicting future mutations and the directions of viral evolution.
Maciej Leszczyński
Definition and Analysis of Structural Entropy of Graphs on an Example of Financial Networks
The aim of this work was to propose a new measure of the structural entropy of a graph and to use it to analyse changes in the correlation structure of financial networks. The structural entropy of a graph was defined on the basis of Shannon's information entropy, calculated from the probability of finding given graph motifs in the studied graph. The analysis was conducted on selected random graphs (Barabási-Albert graphs; graphs of random distance matrices with uniform distribution), as well as on graphs representing companies traded on the London Stock Exchange (LSE) that were constituents of the FTSE 100 index in 2016. Real networks were based on distance matrices calculated using Mantegna's distance; both real and random matrices were then used to construct MSTs and threshold-filtered graphs. Real distances were derived from correlations of daily returns computed from closing prices of stocks traded on the LSE, for chosen, overlapping time windows of lengths 5, 20 and 60 days, within the period 2014-2018. The following effects of political and economic events were observed: the Brexit referendum had no significant impact on the correlation structure as measured by structural entropy, with respect to the average value of the test period (01.01.2014-07.05.2015). Configurations with short window lengths (5, 20 days) showed a significant (over 2 standard deviations) decrease of entropy following the crash on the Chinese financial markets of 24.08.2015 (for the 5-day window, 4 out of 6 thresholds). In 10 out of 21 configurations, the fluctuations of structural entropy around the mean of the test period were low over the whole studied period (remaining thresholds of the 5-day window: 0.25, 0.25-0.75; all MST configurations). The analysis of configurations with longer time windows (20 days: 0.25; 60 days: 0.25, 0.75, 0.25-0.75) was prevented by high standard deviations in the test period or the inability to determine the standard deviation.
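A toy version of the motif-based entropy idea (here with connected 3-node subgraph classes on a small random graph; the study's exact motif set, Mantegna-distance networks and filtering pipeline are not reproduced) could look like:

    from itertools import combinations
    import math
    import networkx as nx

    G = nx.barabasi_albert_graph(60, 2, seed=1)

    counts = {"path": 0, "triangle": 0}     # connected 3-node motif classes
    for trio in combinations(G.nodes, 3):
        e = G.subgraph(trio).number_of_edges()
        if e == 2:
            counts["path"] += 1
        elif e == 3:
            counts["triangle"] += 1

    total = sum(counts.values())
    H = -sum(c / total * math.log2(c / total) for c in counts.values() if c > 0)
    print(counts, f"-> structural entropy = {H:.3f} bits")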
Ignacy Czajkowski
Localization of Multiple Sources of Interacting Information in Complex Networks
The problem of source localization in complex networks is a rapidly developing field with numerous real-world applications, such as determining the credibility of information by identifying its original authors. However, the models commonly used to simulate information propagation typically do not account for real-world phenomena such as information overload and interactions between multiple simultaneously spreading pieces of information.
To investigate how the phenomenon of information overload affects the effectiveness of information source localization, I introduced two versions of a multi-information propagation model (mGFSIR) in this work, each incorporating a parameter that governs the strength of information overload. The first version distinguishes between information from different sources, while the second treats all information equally.
For two synthetic networks (ER and BA) and three real-world networks, I simulated the spread of multiple pieces of information using both versions of the mGFSIR model. I then estimated the sources of the information using Pearson’s localization algorithm. By comparing the estimated sources with the actual ones, I evaluated localization effectiveness using precision and ranking metrics across various values of the model parameters.
For both versions of the model, I studied two scenarios: in the first, a varying number of pieces of information (from 1 to 5) were propagated at the same spreading rate; in the second, I examined the effectiveness of localizing the main piece of information when four additional noise pieces of information were simultaneously propagated.
The results of the simulations showed a decline in localization effectiveness with increasing information overload and a greater number of simultaneously propagating pieces of information. This decline was more pronounced in the version of the model that treated all information equally. For all studied networks and both model versions, it was observed that the effectiveness of localizing the main piece of information decreased with increasing propagation speed of the noise information, in cases where information overload was present.
Kordian Makulski
Modeling of Fractal Complex Networks With Flexible Parameter Adjustments
Fractality is observed in a wide variety of real-world complex networks, including systems as diverse as the World Wide Web, biological networks such as protein-protein interactions, and social networks like co-authorship graphs. Despite the prevalence of fractality, existing models of fractal networks are typically constrained in their ability to reproduce this feature in a general and tunable manner. Most conventional approaches rely on deterministic construction rules, often resulting in networks with restricted sizes and abrupt jumps in degree distributions. Limitations of these models reduce their applicability to real-world fractal systems.
We introduce a model of evolving fractal networks based on inverse renormalization. The growth process is governed by a set of stochastic rules, overcoming typical limitations of size and degree distribution. A key feature of our model is the incorporation of multiple tunable parameters. This design enables independent manipulation of important network features such as the box-counting dimension and the degree distribution.
To demonstrate the practical relevance and adaptability of the model, we apply it to reproduce the structural characteristics of three well-studied real-world complex networks: a scientific co-authorship network, the World Wide Web, and the Internet at the autonomous systems level. In each case, the model successfully reproduces the observed degree distributions, box-mass distributions and fractal dimensions. Our results suggest that the proposed framework provides a flexible foundation for further research in the field of fractal complex networks.
Jan Rawa
Fake News Influence on Information Overload
As the significance of the opinion-shaping function of the World Wide Web grows, the spread of false news has transformed into a source of dangerous social movements such as “anti-vaxxers” or COVID-19 skeptics. The tragic experiences of the 2019–2022 pandemic demonstrate that the spread of fake news online can have devastating consequences for society.
Information overload is a phenomenon that accompanied humanity long before the dawn of the information era: for hundreds of years, time constraints have made it impossible to be a so-called “Renaissance person.” Widespread access to information has significantly increased the portion of the population exposed to information overload.
The data used in the study come from the social media platform Reddit. The analysed content was published in communities related to COVID-19 topics and spans the years 2019–2023.
The impact of information overload on the spread of fake information was examined. Statistical text analyses were conducted, employing techniques such as topic modeling and artificial intelligence.
Four information overload measures were proposed. The analysis revealed a correlation between the proportion of fake information and one of the proposed measures.
17:00
Julian Sienkiewicz
Big Tech Influence Over AI Research Revisited: Memetic Analysis of Attribution of Ideas to Affiliation
There exists a growing discourse around the domination of Big Tech over the landscape of artificial intelligence (AI) research, yet our comprehension of this phenomenon remains cursory. This paper aims to broaden and deepen our understanding of Big Tech’s reach and power within AI research. It highlights the dominance not merely in terms of sheer publication volume but rather in the propagation of new ideas, or memes. Current studies often oversimplify the concept of influence to the share of affiliations in academic papers, typically sourced from limited databases such as arXiv or specific academic conferences.
The main goal of this paper is to unravel the specific nuances of such influence, determining which AI ideas are predominantly driven by Big Tech entities. By employing network and memetic analysis on AI-oriented paper abstracts and their citation network, we are able to grasp a deeper insight into this phenomenon. By utilizing two databases: OpenAlex and S2ORC, we are able to perform such analysis on a much bigger scale than previous attempts.
Our findings suggest that while Big Tech-affiliated papers are disproportionately more cited in some areas, the most cited papers are those affiliated with both Big Tech and Academia. Focusing on the most contagious memes, their attribution to specific affiliation groups (Big Tech, Academia, mixed affiliation) seems equally distributed between those three groups. This suggests that the notion of Big Tech domination over AI research is oversimplified in the discourse.
17:20
Hubert Kołcz
CHSH-Based Adversarial Detection for Mechanism Design
Contemporary democratic institutions face systematic threats from sophisticated adversarial manipulation, particularly through LLMs that exhibit significant exploitation vulnerabilities in strategic scenarios. Research demonstrates that advanced LLMs maintain baseline performance levels of 57% for GPT-4, 62% for DeepSeek-V3, and 38% for Gemini-1.5 in decision-making tasks, but show dramatic vulnerability increases to 93%, 95%, and 94% respectively following strategic adversarial intervention. These vulnerabilities expose fundamental limitations in traditional mechanism design approaches, which cannot distinguish between authentic citizen preferences and decision-making processes with artificially induced biases.
This research introduces a quantum-safe algorithmic mechanism design framework that integrates CHSH game rigidity with democratic governance protocols to detect and counter adversarial influence. The framework addresses the computational complexity challenge where optimal mechanism design is #P-hard by leveraging quantum correlation testing as a polynomial-time verification protocol. CHSH scores serve as institutional integrity certificates: legitimate democratic processes maintain correlations within classical bounds (S ≤ 2), while adversarial manipulation may produce systematic deviations that approach quantum correlation limits near 2√2.
The proposed methodology implements device-independent verification protocols adapted for institutional applications, where democratic decision nodes engage in CHSH-style correlation testing to certify communication integrity without trust assumptions. Building on DI-QKD protocols, the framework provides mathematically verifiable democratic authenticity through quantum correlation auditing. Quantum change-point detection algorithms enable rapid identification of coordinated attacks against institutional winning criteria, specifically targeting LLM-facilitated bias injection in governance decision making.
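For concreteness, the CHSH statistic invoked above can be estimated from ±1 outcome records as S = E(a,b) + E(a,b′) + E(a′,b) − E(a′,b′). The sketch below uses a synthetic classical strategy with noise; how outcomes would be mapped onto institutional decision nodes is the speaker's framework and is not shown.

    import numpy as np

    rng = np.random.default_rng(11)
    n, q = 20_000, 0.05                    # rounds, outcome flip (noise) rate

    lam = rng.choice([-1, 1], size=n)      # shared classical hidden variable

    def noisy(outcomes):
        """Flip each +/-1 outcome with probability q (imperfect honest nodes)."""
        return np.where(rng.uniform(size=outcomes.size) < q, -outcomes, outcomes)

    A = {"a": noisy(lam), "a'": noisy(lam)}    # deterministic strategy plus noise
    B = {"b": noisy(lam), "b'": noisy(lam)}

    def E(u, v):                           # correlator E(a_i, b_j)
        return float(np.mean(u * v))

    S = E(A["a"], B["b"]) + E(A["a"], B["b'"]) + E(A["a'"], B["b"]) - E(A["a'"], B["b'"])
    print(f"S = {S:.3f}  (classical bound 2, quantum maximum 2*sqrt(2) ~ 2.828)")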
17:40
Mateusz Ozimek
Information Flow in ECG Signals: Nonlinear Dynamics of Heart Rhythm and Repolarization
This study investigates the applicability of nonlinear markers based on information theory to the analysis of ECG-derived time series for identifying physiologically relevant interactions between components of the cardiac cycle. The goal is to evaluate the diagnostic potential of entropy-based measures in distinguishing normal and pathological cardiac dynamics using standard Holter ECG recordings. Information-theoretic methods offer a general approach to analysing complex, multivariate time series governed by interdependent processes. In this study, the cardiac cycle is examined as a specific example of such dynamics.
The analysis involves univariate and multivariate markers derived from RR, QT and DI intervals, as well as time series of R- and T-wave amplitudes. ECG data were obtained from the THEW Project databases. Directional interactions are quantified using bivariate and trivariate transfer entropy and conditional entropy decomposition. These measures are used to construct physiological interaction networks that reflect information flow between processes related to heart rhythm, repolarization, and rest phase of the cardiac cycle.
The methodology is applied to data from healthy individuals and patients diagnosed with Long QT Syndrome, coronary artery disease, and hypertrophic cardiomyopathy. Statistically significant differences in entropy-based parameters are identified across groups.
Feature sets based on information flow metrics are further evaluated using machine learning classifiers. The results confirm that such measures capture aspects of nonlinear signal dynamics and may contribute to risk stratification and screening in cardiology. Although derived from univariate ECG recordings, the information flow measures support time series classification. Multivariate analysis further enhances the detection of arrhythmic characteristics using standard ECG.
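As an illustration of the directional measures mentioned above, the sketch below estimates a binned bivariate transfer entropy on synthetic coupled series standing in for, e.g., RR and QT sequences; the binning, the coupling strength and the one-step history are illustrative simplifications.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 20_000
    x = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):                  # y is driven by the past of x
        y[t] = 0.5 * y[t - 1] + 0.5 * x[t - 1] + 0.3 * rng.standard_normal()

    def transfer_entropy(x, y, bins=4):
        """TE(X->Y) = sum p(y1,y0,x0) log2[ p(y1|y0,x0) / p(y1|y0) ]."""
        cx = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
        cy = np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1])
        dx, dy = np.digitize(x, cx), np.digitize(y, cy)
        triples = np.stack([dy[1:], dy[:-1], dx[:-1]], axis=1)
        p_yyx, _ = np.histogramdd(triples, bins=(bins, bins, bins))
        p_yyx /= p_yyx.sum()
        p_yy = p_yyx.sum(axis=2)           # p(y1, y0)
        p_yx = p_yyx.sum(axis=0)           # p(y0, x0)
        p_y = p_yy.sum(axis=0)             # p(y0)
        te = 0.0
        for i in range(bins):
            for j in range(bins):
                for k in range(bins):
                    p = p_yyx[i, j, k]
                    if p > 0:
                        te += p * np.log2(p * p_y[j] / (p_yx[j, k] * p_yy[i, j]))
        return te

    print(f"TE(x->y) = {transfer_entropy(x, y):.3f} bits")   # clearly positive
    print(f"TE(y->x) = {transfer_entropy(y, x):.3f} bits")   # near zero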
18:00
Maciej J. Mrowiński
Interplay Between Tie Strength and Neighbourhood Topology in Complex Networks
Granovetter’s “strength of weak ties” theory posits a systematic link between tie strength and the local topology of networks: strong ties bind densely interconnected groups, while weak ties bridge groups and enable the diffusion of information. In collaboration networks, edge weights (e.g., repeated coauthorships) serve as proxies for tie strength, and neighbourhood overlap measures interconnectedness as the fraction of shared neighbours relative to the union of neighbours of two nodes. Yet the scientific collaboration network has often been cited as a counterexample to Granovetter’s theory, because with standard (symmetric) definitions the average neighbourhood overlap decreases as edge weight increases.
We show that this apparent contradiction arises from imposing symmetry on relations that are effectively directional. For example, in scale-free networks, the two endpoints of an edge can have very different degrees. We therefore define asymmetric versions of tie strength and neighbourhood overlap that take each node’s perspective into account. We show that under these asymmetry-aware measures, the collaboration network aligns with Granovetter’s picture: strong ties correspond to large overlaps, whereas weak ties have low overlap and bridge communities.
Correlations in social networks can be structural (implied by topology) or sociological (driven by processes that determine the weights). To disentangle these, we use correlation profiles that benchmark the empirical network against weight-shuffled null models preserving degrees and strengths. This filters out structural effects and confirms that the observed correlation between weights and overlap exceeds topological baselines. Moreover, similar Granovetter-like relationships appear beyond collaboration networks, including in metabolic and neural networks.
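For concreteness, the contrast between symmetric and asymmetric neighbourhood overlap can be sketched as below; the asymmetric variant shown (normalizing by the focal node's own neighbourhood) is one possible choice and need not match the talk's exact definitions.

    import networkx as nx

    G = nx.barabasi_albert_graph(200, 3, seed=4)

    def overlap_symmetric(G, i, j):
        """Standard overlap: shared neighbours over the union of neighbourhoods."""
        ni, nj = set(G[i]) - {j}, set(G[j]) - {i}
        union = ni | nj
        return len(ni & nj) / len(union) if union else 0.0

    def overlap_from(G, i, j):
        """Overlap as seen from node i: shared neighbours over i's own neighbours."""
        ni, nj = set(G[i]) - {j}, set(G[j]) - {i}
        return len(ni & nj) / len(ni) if ni else 0.0

    i, j = next(iter(G.edges))
    print(f"symmetric: {overlap_symmetric(G, i, j):.3f}, "
          f"from {i}: {overlap_from(G, i, j):.3f}, from {j}: {overlap_from(G, j, i):.3f}")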
18:30
Conference Dinner 🍽️
Day 2
8:30
Meeting of FENS Section and Morning Coffee ☕
10:45
Jarosław Kwapień
Filtering Amplitude Dependence of Correlation Dynamics in Complex Systems: Application to the Cryptocurrency Market
A fundamental characteristic of complex systems is the nonlinear interactions between their constituent elements. In such systems, the evolution is typically driven by multiple generators, and the related signals comprise effects induced by different generators that dominate at different times. Complexity may be encoded in nonlinear temporal dependencies within the sequence of fluctuations, but also in their amplitude. It is often more pronounced in fluctuations within a certain amplitude range rather than in those outside of it. Typically, large fluctuations exhibit larger complexity compared to small fluctuations, which are often overwhelmed by noise. It is therefore crucial to employ a tool capable of distinguishing data that are relevant to the structural complexity from those that are not. We present a methodology for the analysis of multivariate time series based on the detrended, multiscale cross-correlation coefficient ρ_q(s), which enables selection of the signal amplitude ranges whose interdependencies are of interest. This methodology is illustrated using empirical data from the cryptocurrency market. The evolution of multiscale cross-correlations between cryptocurrencies is examined for various time horizons s and for different ranges of return amplitudes selected by the Rényi index q. Based on these correlations, one can construct undirected networks in which individual cryptocurrencies are nodes and cross-correlations define the weights. Substructures of such networks, such as multiscale minimum spanning trees (qMSTs), can be extracted and their topology analyzed for different amplitude ranges and in different time periods. In this way, one gains insight into the market structure and its susceptibility to shocks and other market events, and can identify assets that lead the market dynamics. The proposed methodology can be applied to study multivariate time series from various systems; in the case of financial markets, however, it is particularly useful due to the quality of the data.
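A simplified, q = 2 version of the coefficient mentioned above is the detrended cross-correlation coefficient ρ(s) = F²_xy(s)/(F_x(s)F_y(s)); the sketch below computes it on synthetic correlated series. The amplitude-selective weighting by the index q that is central to the talk is omitted here.

    import numpy as np

    rng = np.random.default_rng(9)
    n = 10_000
    common = rng.standard_normal(n)                  # shared driver of both assets
    x = np.cumsum(common + 0.8 * rng.standard_normal(n))
    y = np.cumsum(common + 0.8 * rng.standard_normal(n))

    def detrended_covariance(a, b, s):
        """Mean covariance of quadratically detrended residuals in windows of size s."""
        t, covs = np.arange(s), []
        for start in range(0, len(a) - s + 1, s):
            ra = a[start:start + s] - np.polyval(np.polyfit(t, a[start:start + s], 2), t)
            rb = b[start:start + s] - np.polyval(np.polyfit(t, b[start:start + s], 2), t)
            covs.append(np.mean(ra * rb))
        return np.mean(covs)

    def rho(a, b, s):
        return detrended_covariance(a, b, s) / np.sqrt(
            detrended_covariance(a, a, s) * detrended_covariance(b, b, s))

    for s in (16, 64, 256):
        print(f"rho(s={s}) = {rho(x, y, s):.3f}")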
11:30
Coffee Break ☕
12:00
Aleksejus Kononovicius
Periodic Polling Effects on the Opinion Dynamics in the Noisy Voter Model
Two core components of the noisy voter model are exploration and peer pressure. Exploration of the opinion space is often assumed to be driven by an erratic flipping of the individual agent states (i.e., noise). Peer pressure is assumed to be mediated via direct interactions between the agents, often implemented as a randomly selected agent copying the state of a randomly selected peer during a single Monte Carlo simulation step. Depending on the parametrization, the model will either converge towards a stationary point or to a steady-state distribution (a Beta distribution). The latter case is fascinating, as the model can then reproduce electoral data [1].
Here, instead, we assume that peer pressure is exerted indirectly through public information, which is revealed periodically [2]. We show that the variance of the steady-state distribution decreases monotonically as the information announcement period grows. When the information is announced periodically but with a delay (as is somewhat common with real-world polls), the variance decreases compared to the non-delayed noisy voter model, but the scaling has a non-trivial shape. Announcement delays, as ought to be expected, also induce periodic fluctuations.
[1] A. Kononovicius. Complexity 2017: 7354642 (2017). doi: 10.1155/2017/7354642.
[2] A. Kononovicius et al. Physica A 652: 130062 (2024). doi: 10.1016/j.physa.2024.130062.
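A schematic simulation of the mechanism described above (not the speaker's exact formulation: between announcements, agents here react to the last published fraction instead of a live peer, and all parameters are illustrative):

    import numpy as np

    rng = np.random.default_rng(13)
    N, eps, period, steps = 200, 0.05, 50, 200_000

    state = rng.integers(0, 2, size=N)     # binary opinions
    announced = state.mean()               # last published poll result
    fractions = []

    for t in range(steps):
        if t % period == 0:
            announced = state.mean()       # periodic public announcement
        i = int(rng.integers(N))
        if rng.uniform() < eps:            # exploration: random idiosyncratic flip
            state[i] = rng.integers(0, 2)
        else:                              # conformity to the published fraction
            state[i] = 1 if rng.uniform() < announced else 0
        fractions.append(state.mean())

    print(f"mean support {np.mean(fractions):.3f}, variance {np.var(fractions):.4f}")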
12:20
Anna Chmiel
Temperature-Noise Interplay in a Coupled Model of Opinion Dynamics
We consider a coupled system mimicking opinion formation under the influence of a group of q neighbors (q-lobby), which consists of an Ising part governed by a temperature-like parameter T and a voter dynamics parameterized by a noise probability p (independence of choice). Using rigorous analytical calculations backed by extensive Monte Carlo simulations, we examine the interplay between these two quantities. Based on the theory of phase transitions, we derive the relation between T and p at the critical line dividing the ordered and disordered phases, which takes a very simple and generic form, T(p-a)=b, in the high-temperature limit. For specific lobby sizes, we show where temperature and noise are balanced, and we hint that for large q the temperature-like dynamics prevails.
12:40
Rytis Kazakevičius
Anomalous Diffusion and Time-Dependent Statistics in a Scaled Voter Model
Recently, several non-Markovian extensions of the voter model have been proposed, offering alternatives to the classical Markovian formulation. For example, Artime et al. [1] examined the effects of state aging, showing that it leads to a frozen, discordant state, unlike the consensus typically reached in the standard voter model. Raducha and San Miguel [2] introduced a co-evolutionary framework, where the interaction topology evolves alongside individual states, and applied it to model competition between languages and dialects.
In this work, we aim to replicate certain non-Markovian features without introducing explicitly non-Markovian mechanisms. To this end, we incorporate scaled Brownian motion (SBM) into the noisy voter model. SBM captures key characteristics of fractional Brownian motion (fBm), such as time-dependent diffusivity. We focus on the scaled voter model, an extension of the noisy voter model in which the intensity of herding behavior evolves as a power-law function of time. Under this regime, the model effectively becomes a noisy voter model driven by SBM.
We derive analytical expressions for the time evolution of the first and second moments of the system, as well as an approximation for the first-passage time distribution. Numerical simulations support our analytical findings and further reveal that, despite its Markovian nature, the model exhibits signs of long-range memory. Moreover, the use of SBM introduces non-stationarity in the increment distribution—a property observed in empirical data such as the S&P 500 time series [3].
References
[1] O. Artime, A.F. Peralta, R. Toral, J.J. Ramasco, M. San Miguel, Phys. Rev. E 98 (2018) 032104.
https://doi.org/10.1103/PhysRevE.98.032104
[2] T. Raducha, M. San Miguel, Sci. Rep. 10 (2020) 15660.
https://doi.org/10.1038/s41598-020-72662-8
[3] P.G. Meyer, M. Zamani, H. Kantz, Physica A 612 (2023) 128497.
https://doi.org/10.1016/j.physa.2023.128497
13:00
Lunch 🍽️
14:00
Łukasz Brzozowski
The Price-Pareto Model With Global Community Structure
Citation networks play a central role in bibliometrics and the science of science, yet existing models often neglect the influence of community structure – an essential feature in many real-world networks. While communities are known to shape the evolution of social and biological networks, their impact on citation dynamics remains underexplored.
In our work, we generalize and extend an existing citation model – the 3DSI model – by introducing community-aware growth. Each node is assigned to a known community (e.g., a scientific discipline), and the model accounts for intra- and inter-community citation dynamics. The 3DSI model in its basic formulation assumes that citations are governed not only by preferential attachment but also by accidental attachment; the same assumptions in the new setting enable us to derive new analytical formulas for many network parameters, such as degree distributions within individual communities and estimates of the ratio between preferential and accidental citations.
We validate our model on the DBLP citation network, showing that incorporating community structure significantly improves model fit. Moreover, it provides new insights into citation behaviors across disciplines by separately modeling community-specific degree sequences. Our framework enhances the interpretability of citation networks and offers a flexible foundation for studying their structure and growth.
14:20
Robert Paluch
Location of Propagation Sources in Complex Networks Using Effective Propagation Distance
Identifying the origin of a spreading process, such as an epidemic outbreak or information diffusion, is a key challenge in network science.
Source location methods are typically categorized as snapshot-based or observer-based, with the latter relying on infection times recorded by selected nodes.
Recent research suggests that correlation-based methods (a subcategory of observer-based methods) often provide the highest localization accuracy, assuming that the further an observer is from the source, the later it gets infected.
We improve upon this approach by introducing a weighted Pearson correlation, prioritizing early observers with a decreasing weight function. Additionally, we propose a novel distance metric, the Effective Propagation Distance (EPD), which accounts for all possible propagation paths instead of relying solely on geodesic distance. The EPD is derived from multiple simulations of the Susceptible-Infected (SI) model, providing a more realistic measure of information travel times.
Extensive numerical experiments on real networks demonstrate that weighting early observers enhances localization accuracy, with optimal weighting parameters depending on network topology and transmission properties. Furthermore, EPD consistently outperforms geodesic distance in all tested scenarios. These findings indicate that our approach significantly improves source localization efficiency, paving the way for applications in real-world propagation detection.
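The core of the EPD idea can be sketched in a few lines: average the SI arrival times from repeated simulations and use them in place of geodesic distance. The network, transmission rate, number of runs and time cutoff below are all illustrative.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(17)
    G = nx.barabasi_albert_graph(300, 3, seed=17)
    beta, runs = 0.3, 50

    def si_arrival_times(G, source, beta):
        """One discrete-time SI run; returns node -> first infection time."""
        t, infected, arrival = 0, {source}, {source: 0}
        while len(infected) < G.number_of_nodes() and t < 200:
            t += 1
            new = {v for u in infected for v in G[u]
                   if v not in infected and rng.uniform() < beta}
            for v in new:
                arrival[v] = t
            infected |= new
        return arrival

    def effective_propagation_distance(G, source, beta, runs):
        """EPD(source, v): mean SI arrival time at v over repeated runs."""
        acc = np.zeros(G.number_of_nodes())
        for _ in range(runs):
            for v, t in si_arrival_times(G, source, beta).items():
                acc[v] += t
        return acc / runs   # nodes never reached within the cutoff are undercounted

    epd = effective_propagation_distance(G, 0, beta, runs)
    geo = nx.single_source_shortest_path_length(G, 0)
    for v in list(G.nodes)[:5]:
        print(f"node {v}: geodesic {geo[v]}, EPD {epd[v]:.2f}")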
14:40
Grzegorz Siudem
Price-Pareto-Gini Model for Evolving Networks
We consider the Price model of an evolving network, i.e., a growing graph in which, at every step, we add a new vertex and connect it to existing vertices with a mixture of the preferential attachment rule and purely accidental attachment. We derive the model's expected vertex degrees and show that they are consistent with order statistics from the Pareto type-2 distribution. Moreover, we show that for such a dynamical system the Gini index (an inequality measure well known from econometrics) is invariant. We also present an application of the model to real data from citation networks. Most of the results discussed were obtained in cooperation with Marek Gagolewski and Barbara Zogala-Siudem; on individual issues we also collaborated with Anna Cena, Lucio Bertoli-Barsotti, Przemysław Nowak and Maciej J. Mrowinski.
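A minimal simulation of the growth rule described above, together with the Gini index of the resulting degree sequence (the mixing parameter, seed graph and network size are illustrative, and multi-edges are not excluded):

    import numpy as np

    rng = np.random.default_rng(21)
    m, rho, N = 3, 0.3, 5_000      # edges per new vertex, accidental share, size

    degree = np.zeros(N, dtype=int)
    stubs = []                     # every node listed once per incident edge end
    for v in range(m + 1):         # small complete seed graph on m+1 vertices
        for u in range(v):
            degree[u] += 1; degree[v] += 1
            stubs += [u, v]

    for v in range(m + 1, N):
        new_stubs = []
        for _ in range(m):
            if rng.uniform() < rho:
                u = int(rng.integers(v))                   # purely accidental target
            else:
                u = stubs[int(rng.integers(len(stubs)))]   # preferential target
            degree[u] += 1; degree[v] += 1
            new_stubs += [u, v]
        stubs += new_stubs

    def gini(x):
        """Gini index of a nonnegative sample."""
        x = np.sort(x.astype(float))
        n = x.size
        return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

    print(f"Gini index of the degree sequence: {gini(degree):.3f}")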
15:00
Janusz Hołyst
Protect Our Environment From Information Overload!
We are now exposed daily to more information than we can process, and this has substantial costs. I argue that the information space should be recognized as part of our environment and call for research into the effects and management of information overload.
In today’s world, access to information, understood as the resolution of uncertainty, is often considered a benefit or even an indisputable human right. There is, however, a “dark side” of information: an abundance of data beyond one’s capacity to process them leads to so-called information overload (IOL). This notion troubled mankind long before print was invented and has been examined from different points of view, ranging from neuroscience to journalism. IOL is, however, usually considered at the individual level, by examining a single factor or a specific level that eventually leads to switching off an active individual. The influence of IOL appearing simultaneously at different levels, i.e., multilevel information overload, remains unknown. These observations led to the main aim of an international project supported by an EU grant: OMINO – Overcoming Multilevel Information Overload. The OMINO project aims to measure multilevel IOL in different systems, to develop methods to model IOL, and to develop countermeasures to mitigate this phenomenon.
References
1. https://ominoproject.eu
2. J.A. Hołyst et al., Protect our environment from information overload, Nature Human Behaviour 8, 402–403 (2024)
15:20
Czesław Mesjasz
Numbers in Econophysics and Sociophysics: Social Constructs, Operationalization, Operationism and Measurement
The key issue in applying any kind of mathematical model in empirical studies of economic and social phenomena is the numerical data, their availability and identification. Too frequently, numbers are applied in such studies without sufficiently profound reflection. This concerns both the classical social, economic and financial studies based on observation of specific cases and measurement of their properties, as well as the applications of various statistical methods.
This problem also affects attempts to apply methods from other disciplines in studying society. Frequently, the studies conducted by authors competent in mathematics, physics, chemistry, biology and related disciplines include applications of mathematical models in studying various aspects of economics, finance, and collective social phenomena. When asked about data and their relevance to economic and social phenomena, these authors respond that their attention is focused solely on the mathematical sophistication of the models and their predictive power. A deepened reflection on collecting the numerical data for the models is missing or insufficient.
The problem of “numbers” – the fundamentals of the methodology of measurement – is one of the most challenging in all disciplines of science. It has been widely discussed, although no universal answers may ever be found. Measurement in physics is regarded as a pattern for other areas of research: it is treated as objective, although this very term demands a more detailed explanation.
In the search for numerical data concerning financial, economic and social phenomena, the main problem is that such data are social constructs, and the role of the observer must be taken into account.
Econophysics and sociophysics, as areas of development of mathematical modelling, are especially prone to the challenges of measurability and relevance to economic and social reality. For the sake of clarity, the following explanation is needed. They can be treated as an extension of the applications of classical physics to the studies of economic, financial and social phenomena. There is an important conceptual problem with the distinction between econophysics and sociophysics. Sometimes sociophysics is treated as a separate domain of knowledge dealing with collective social phenomena, e.g. (Abergel et al. 2017; Kutner et al. 2022). A narrowed idea of sociophysics based on percolation was developed by Galam (2004). In some considerations, the studies of collective social phenomena with the application of variously defined complex systems are treated as a part of econophysics (Ball 2006).
In this chapter, another distinction is proposed. The terms social phenomena, social systems and, in some instances, social events are applied to collective phenomena with full awareness of the subtleties associated with each of these terms. It is also borne in mind that this distinction is somewhat fuzzy, as in the case of the collective behaviour of actors in financial models. However, because the sense of mathematical models and measurement differs between economics and finance on the one hand and the study of the properties of collective social phenomena (social systems) on the other, this distinction is maintained herein. Whenever necessary, additional explanations are added.
The question about this distinction is not an idle one, because the sense of measurement in sociophysics is more intricate than in “classical”, finance-oriented econophysics. Numerical data concerning collective social behaviour are often more complex social constructs, based on qualitative considerations in sociology, psychology, management, political science, security studies, etc. That is why interpreting them as analogous to physical entities is sometimes of limited value. These subtleties should be kept in mind when the role of measurement and numbers in econophysics and sociophysics is considered.
The tradition of applying ideas from physics to the study of broadly defined economic, financial, and social phenomena is well known. As the history of modern economics shows, the discipline developed as a specific form of the physics of society, and the impact of ideas borrowed from physics on economics, finance, and the social sciences has been a constant element of their development. This phenomenon has been described by multiple authors, of whom only some are quoted here (Mirowski 1989; 1994; Bernstein et al. 2000; Ball 2004). It led to the emergence of “physics envy”, i.e. a specific tendency in economics to imitate physics (Mirowski 1989).
In the study of the role of measurement and numbers in econophysics and sociophysics, the following circumstances must be taken into account. In econophysics, initiated by Eugene Stanley and continued by other authors, the dominant role was played by the availability of a large amount of financial data (Mantegna and Stanley 2000, p. viii): “virtually every economic transaction is recorded, and an increasing fraction of the total number of recorded economic data is becoming accessible to interested researchers. Facts such as these make financial markets extremely attractive for researchers interested in developing a deeper understanding of the modelling of complex systems”. As an extension of this initial idea, Schinckus (2016, 2018) distinguished three ways of doing econophysics: statistical econophysics, bottom-up agent-based econophysics, and top-down agent-based econophysics.
It could be expected that applications of physics-based ideas to research on economics, finance, and social systems, and econophysics and sociophysics in particular, would pay attention to the problem of identifying the numerical data. In more general applications of ideas from physics in social studies, this problem is indeed raised.
An introductory survey of the econophysics and sociophysics literature shows, however, that only some selected issues are discussed. The problems associated with the identification of numerical data, or more precisely with measurement, are not studied sufficiently profoundly, especially when set against the mathematical sophistication of the models.
When discussing the advantages of nonlinear models in studying economic phenomena, Jakimowicz (2016, pp. 902-903) observes that the degree to which research results approximate reality can be determined only with the use of a nonlinear, more precise model. Since most nonlinear models have no analytical solutions, object trajectories are calculated using numerical methods. The results take the form of numbers, or more precisely time series, sometimes very long ones. It therefore turns out that even more precise nonlinear models have no direct reference to the real world.
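To make this observation concrete, consider a minimal, purely illustrative sketch in Python. The logistic map is our own stand-in here, not a model discussed by the author: it is a simple nonlinear map whose trajectories have no general closed-form solution, so the only output available to a researcher is a numerically generated time series.

    # Illustrative only: the logistic map as a stand-in for the
    # nonlinear models discussed above. Its trajectories have no
    # general analytical solution; all we can compute is a time series.

    def trajectory(r=3.9, x0=0.4, n=1000):
        xs = [x0]
        for _ in range(n - 1):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))  # nonlinear update rule
        return xs

    series = trajectory()
    print(series[:5])  # the "result" is just a (possibly very long) list of numbers

The entire empirical content of such a model is this list of numbers, which still has to be mapped onto real-world observables, which is precisely the gap the paragraph above points to.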
Usually, the numbers in econophysics and sociophysics are treated as in classical physics: as reflecting the properties of the objects of study, objective and independent of the observer. While this may be acceptable in some instances, such as the number of elements in social systems or their physical attributes, it is not always acceptable in finance, where money is ultimately a social construct. An even more challenging problem emerges when studying social systems with characteristics constructed by observers, e.g. interactions between elements, the borders of systems, structures, networks, changes in social systems, interactions between the elements of organizations, and interactions between organizations.
Taking into account the above determinants, this study aims to identify the problems of measurement in studying economic, financial, and social phenomena with ideas borrowed from physics, as applied in econophysics and sociophysics in particular. The following partial research aims are defined. First, a survey of extant approaches to measurement in econophysics and sociophysics. Second, an introductory survey of the fundamental ideas of measurement in a general sense. Third, identification of the specificity of measuring broadly defined economic, financial, and social phenomena treated as social constructs.
To limit the scope of this preliminary study, it is assumed that the depth of analysis is compatible with the middle-level theoretical approach. Additionally, due to the very character of the study, the following assumptions have been made:
1) Consequences of computational complexity are not considered.
2) No reference is made to the measurement problems and the role of the observer in quantum mechanics.
3) Constructivist and interpretive approaches to physical objects are treated in an introductory way.
4) The relations between the measurement of money and the measurement of the value of money are dealt with preliminarily. This issue demands separate, more profound considerations.
5) Due to their specificity, cryptocurrencies are not considered; they demand a separate study.
The study is based on a classical literature survey. Due to its introductory character and limited size, computerized literature research is not needed at this stage. It may be added that the research described in this chapter can serve as an introduction to a more systematic approach to one of the aims of this study.
Given the depth of semantic analysis necessary in this study, a few introductory remarks concerning the differences between definitions and interpretations have to be made. In this study, due to the stress put on social phenomena treated as social constructs, the term interpretation is used as the fundamental one.
In addition to fundamental scientific curiosity and necessity, the inspiration for this chapter derives from a specific source. Frequently, publications, conference presentations, and posters prepared by scholars, especially young ones competent in mathematics, physics, chemistry, biology, and related disciplines, include applications of mathematical models to various aspects of economics, finance, and collective social phenomena. Too frequently, when asked about the data and the relevance of such models to economic and social phenomena, the authors respond that their interests are focused only on methodological aspects, including the mathematical sophistication and intricacy of simulation models. Sometimes the difficulties of collecting and processing the data, as well as the predictive power of such models, are emphasized.
In the case of finance, such a simplified approach is at least partly justifiable due to the specificity of measurement therein. In the case of collective phenomena in society, such interpretations can be insufficient and even counterproductive. Often, when asked what is “social” in their models, the authors answer with a simple enumeration of evident characteristics of social behaviour, in which the depicted elements/actors are humans treated as homogeneous, non-conscious particles with simple first-order interactions.
Preliminary Bibliography
Abergel, F., Aoyama, H., Chakrabarti, B.K., Chakraborti, A., Deo, N., Raina, D. and Vodenska, I. (2017). Econophysics and Sociophysics: Recent Progress and Future Directions. Cham: Springer International Publishing.
Ball, P. (2004). Critical Mass: How One Thing Leads to Another. New York: Farrar, Straus and Giroux. Kindle Edition.
Ball, P. (2006). Econophysics: Culture Crash. Nature 441 (8 June): 686–688.
Bernstein, S., Lebow, R.N., Gross Stein, J. and Weber, S. (2000). God Gave Physics the Easy Problems: Adapting Social Science to an Unpredictable World. European Journal of International Relations 7(1): 43–76.
Galam, S. (2004). Sociophysics: A personal testimony. Physica A 336(2): 49–55.
Jakimowicz, A. (2016). Econophysics as a New School of Economic Thought: Twenty Years of Research. Acta Physica Polonica A 129(5): 897–907.
Kutner, R., Schinckus, C. and Stanley, H.E. (2022). Three Risky Decades: A Time for Econophysics? Entropy 24, 627. https://doi.org/10.3390/e24050627
Mantegna, R.N. and Stanley, H.E. (2000). An Introduction to Econophysics: Correlations and Complexity in Finance. Cambridge: Cambridge University Press.
Mirowski, P. (1989). More Heat than Light: Economics as Social Physics, Physics as Nature’s Economics. Cambridge: Cambridge University Press.
Mirowski, P., ed. (1994). Natural Images in Economic Thought: “Markets Read in Tooth and Claw”. New York/Cambridge: Cambridge University Press.
Schinckus, C. (2016). 1996–2016: Two decades of econophysics: Between methodological diversification and conceptual coherence. Eur. Phys. J. Spec. Top. 225: 3299–3311. https://doi.org/10.1140/epjst/e2016-60099-y
Schinckus, C. (2018). When Physics Became Undisciplined: An Essay on Econophysics. PhD Thesis, Department of History and Philosophy of Science, Girton College, University of Cambridge. https://api.repository.cam.ac.uk/server/api/core/bitstreams/cc3d796f-c130-4f78-89ab-ed16301651ab/content
15:40
Paweł Sobkowicz
The Origin of Inequalities in Early Societies: An ABM Model
We present results of an agent-based model describing the origins of inequalities and social stratification in a certain class of affluent hunter-gatherer societies. These societies, described in the anthropological literature as transegalitarian, are characterized by the availability of surplus resources and the emergence of social inequalities predating the more complex structures found in sedentary agricultural communities. The relative simplicity of the social structure allows us to create a model that focuses on a few key drivers of the process: the variability of certain individual characteristics (skills and talents, luck, greediness) and the effects of a tendency toward assortative matching. Our results provide insights into the relative importance of these individual and societal conditions for the appearance of stable stratification in initially egalitarian societies. Depending on which individual characteristics the society prizes more (skills and contributions versus accumulated and used surplus wealth), the resulting structure may be more meritocratic or more oligarchic.
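The abstract itself contains no implementation details; the following minimal Python sketch is our own illustrative rendering of the class of mechanisms it names (per-agent skill, luck, and greediness plus assortative matching). Every parameter name and update rule below is an assumption, not the authors’ model.

    import random

    # Hypothetical sketch of a transegalitarian ABM. All rules and
    # parameters are illustrative assumptions, not the talk's model.

    N_AGENTS = 200
    N_ROUNDS = 500
    ASSORTATIVITY = 0.8  # probability of exchanging with a similarly ranked partner

    class Agent:
        def __init__(self):
            self.skill = random.gauss(1.0, 0.2)    # fixed individual talent
            self.greed = random.uniform(0.0, 1.0)  # share of surplus kept privately
            self.wealth = 0.0

    def step(agents):
        # Production: surplus above subsistence depends on skill and per-round luck.
        for a in agents:
            luck = random.uniform(0.5, 1.5)
            surplus = max(a.skill * luck - 1.0, 0.0)
            a.wealth += a.greed * surplus  # the remainder is shared communally

        # Assortative matching: agents usually exchange with partners of
        # similar rank, which reinforces stratification over time.
        ranked = sorted(agents, key=lambda x: x.wealth)
        for i in range(0, len(ranked) - 1, 2):
            if random.random() < ASSORTATIVITY:
                a, b = ranked[i], ranked[i + 1]  # neighbours in the wealth ranking
            else:
                a, b = random.sample(agents, 2)  # random partners
            gain = 0.1  # small cooperative gain, split in proportion to wealth
            total = (a.wealth + b.wealth) or 1.0
            a.wealth += gain * a.wealth / total
            b.wealth += gain * b.wealth / total

    def gini(values):
        # Standard Gini coefficient of the wealth distribution.
        xs = sorted(values)
        n, total = len(xs), sum(xs)
        if total == 0:
            return 0.0
        cum = sum((i + 1) * x for i, x in enumerate(xs))
        return 2 * cum / (n * total) - (n + 1) / n

    agents = [Agent() for _ in range(N_AGENTS)]
    for _ in range(N_ROUNDS):
        step(agents)
    print("Gini of final wealth:", round(gini([a.wealth for a in agents]), 3))

In this toy version, making the exchange gain proportional to skill rather than accumulated wealth would tilt the emergent hierarchy from the oligarchic toward the meritocratic regime mentioned in the abstract.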
16:00
Symposium closing
Meet the Organisers
The organisation and scientific integrity of the conference are ensured by the dedicated efforts of the Organising and Scientific Committees. Comprising experts from diverse fields, the committees have played a central role in shaping the programme, selecting contributions, and upholding the highest academic standards.
Scientific Committee
- Stanisław Drożdż
- Janusz Hołyst
- Ryszard Kutner
- Janusz Miśkiewicz
- Jan Sładkowski
- Paweł Sobkowicz
- Katarzyna Sznajd-Weron
Organizing Committee
- Tomasz Gubiec
- Janusz Hołyst
- Maciej J. Mrowiński
- Robert Paluch
- Grzegorz Siudem
- Paweł Sobkowicz


