Logical thinking stands in one corner, clean, confident, and structured—ready with sharp conclusions and clear paths forward. But across the ring stands something more elusive: the messy, layered, and unpredictable nature of reality (the quantum realm). Welcome to the journey into “things are not as simple as they seem, even if you have logically concluded they are.” In this space, rules don’t always apply the same way, and conclusions don’t always lead to clarity. It’s not that logic fails—it’s that it reaches its boundary, and beyond that lies ambiguity, emotion, and influence that can’t be boxed into true or false.
Meet Alex, an imaginary manager who learned this the hard way. A champion of systems thinking and TOC, he was used to finding the constraint, fixing the flow, and watching everything improve. But when the bottleneck turned out to be a teammate’s silence, a subtle tension in the room, or a decision that felt wrong despite looking right on paper, logic gave him no answer. In his performance review of TOC, Alex simply wrote: “Great for machines. Needs a patch for humans.”
The Theory of Constraints (TOC) is a management philosophy developed by Dr. Eliyahu M. Goldratt, which focuses on identifying and addressing the most critical limiting factor, or constraint, that stands in the way of achieving a goal. The central premise of TOC is that every system, no matter how complex, has at least one constraint that prevents it from performing better in relation to its goal. By systematically improving that constraint, the overall performance of the system can be significantly enhanced.
The origins of the Theory of Constraints lie in manufacturing and production, but over time, its principles have been successfully applied in a wide range of industries including logistics, project management, healthcare, and education. The methodology encourages a holistic view of systems, treating them as interdependent chains rather than isolated processes.
A key element of TOC is the Five Focusing Steps, which provide a structured approach to improvement. The first step is to identify the system’s constraint, which could be a physical resource, a policy, or a market limitation. Once identified, the second step is to exploit the constraint by making the most effective use of the resource without incurring major investment. This may involve eliminating inefficiencies, scheduling more effectively, or removing distractions.
The third step is to subordinate everything else to the constraint. This means aligning all other processes and decisions to support the maximum performance of the constraint. Often, this requires a shift in mindset across the organization, as other departments may need to modify their goals or behavior to ensure the constraint is not hindered.
The fourth step is to elevate the constraint, which involves taking actions to increase its capacity. This could include investments in new equipment, hiring more staff, or redesigning processes. It is important that elevation comes only after exploitation and subordination have been fully implemented, as premature investment can be wasteful or misdirected.
The final step is to repeat the process. Once a constraint is broken or no longer limits the system, a new one will inevitably emerge. Continuous improvement depends on consistently returning to the first step and treating improvement as an ongoing cycle rather than a one-time effort.
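To make the cycle above a bit more tangible, here is a minimal Python sketch of the Five Focusing Steps applied to a toy production line. The stage names and capacities are invented purely for illustration; they are not drawn from Goldratt’s own examples.

```python
# Toy model of the Five Focusing Steps on a linear production line.
# Stage names and capacities (units per hour) are invented for illustration.

stages = {"cutting": 120, "assembly": 45, "painting": 80, "packing": 100}

def identify_constraint(line):
    """Step 1: the stage with the lowest capacity limits the whole line."""
    return min(line, key=line.get)

def throughput(line):
    """The line can never produce faster than its slowest stage."""
    return min(line.values())

constraint = identify_constraint(stages)
print(f"Constraint: {constraint}, throughput: {throughput(stages)} units/hour")

# Steps 2 and 3 (exploit, subordinate) would change scheduling and priorities,
# not capacity, so they are not modelled here.

# Step 4 (elevate): invest to raise the constraint's capacity.
stages[constraint] += 50

# Step 5 (repeat): once the old constraint is broken, a new one emerges.
print(f"New constraint: {identify_constraint(stages)}, "
      f"throughput: {throughput(stages)} units/hour")
```

The point of the sketch is simply that improving anything other than the constraint leaves throughput unchanged, and that elevating one constraint only hands the role to the next slowest stage.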
Underlying TOC is the idea that local optimization does not necessarily lead to global optimization. A system’s performance is determined not by how well each part functions individually, but by how well the system works as a whole. This principle challenges traditional performance metrics that encourage departments or individuals to maximize their own efficiency without regard for the larger system.
The Theory of Constraints is a natural extension of basic logical thinking principles—it takes foundational ideas like identity, non-contradiction, and consistency, and applies them dynamically to real-world systems. Just as logical thinking seeks clarity by focusing on what matters most in reasoning, TOC does the same within systems: it identifies the single constraint that limits performance and uses that as the focal point for improvement. So, to take this one step further: what are the basic principles of logical thinking?

Logical thinking is the process of reasoning consistently and coherently to arrive at sound conclusions. It is not about having the right answers, but about using valid methods to arrive at conclusions that follow from the information available. At its core, logical thinking is guided by a few universal principles that help ensure clarity, consistency, and reliability in thought.
The first principle is the law of identity. This states that everything is what it is; a thing is itself. In reasoning, this means that terms must be used consistently. A word or concept must mean the same thing throughout an argument. Shifting definitions midstream causes confusion and undermines logical integrity.
The second principle is the law of non-contradiction. Something cannot be both true and false at the same time in the same context. If a statement and its direct contradiction are both claimed to be true, there is a problem in the reasoning. This principle helps detect errors and prevent incoherence.
The third principle is the law of excluded middle. Between a statement and its negation, there is no third option; something must either be true or not true. While this principle is sometimes nuanced in complex systems or multi-valued logic, it remains a cornerstone of classical reasoning. It reminds us that hesitation to choose is not the same as the absence of a choice.
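As a small aside for the technically minded, these three laws can be checked mechanically, because classical logic admits only two truth values. The snippet below is just an illustration of that point, nothing more:

```python
# Classical logic has only two truth values, so the three laws can be
# verified exhaustively for every possible proposition P.

for P in (True, False):
    assert P == P             # law of identity: a thing is itself
    assert not (P and not P)  # law of non-contradiction
    assert P or not P         # law of excluded middle: no third option

print("All three classical laws hold for every Boolean P.")
```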
Another essential principle is consistency. Logical thinking requires that conclusions follow logically from their premises, without contradiction or arbitrary leaps. Arguments should be constructed so that if the premises are true, the conclusion must also be true. This form of validity doesn’t guarantee truth, but it ensures the reasoning holds together structurally.
Closely related is the principle of sufficiency of evidence. A claim must be supported by enough relevant information to justify belief or action. Weak or missing evidence can’t support a strong conclusion. Logical thinkers assess whether the evidence presented is adequate and whether it actually relates to the claim being made.
The principle of relevance is also vital. In a logical process, only relevant information should be considered. Distractions, appeals to emotion, or unrelated data may sound persuasive but do not strengthen an argument. Staying focused on what matters is key to clear reasoning.
Finally, logical thinking values clarity and precision. Ambiguity in language, assumptions, or structure weakens thinking. Clear definitions, transparent reasoning, and a structured presentation of ideas make it easier to evaluate and discuss conclusions.
I’ve always applied the basic principles of logical thinking—and the ideas behind the Theory of Constraints—across much of how I think and operate. They’ve given me structure, helped me focus, and allowed me to make better decisions, especially when things are complicated or urgent.
But one principle that’s always made me pause is the law of excluded middle. On the surface, it’s simple: something is either true or it isn’t. It demands clarity, decisiveness, and commitment. And in a lot of cases, that’s exactly what’s needed. It reduces the noise and forces a conclusion. But it’s also reductive—and that’s where I start to question it.
What really interests me is the grey area that exists around it. In complex systems, in science, in human behavior—truth doesn’t always live on just one side of a binary. Sometimes, more than one factor is involved, and they interact in a way that doesn’t directly change the outcome, but still influences it. Think of something that doesn’t carry information itself, but strengthens the signal so that the message gets through. That kind of influence isn’t captured by “true” or “false.” It’s not part of the main thread of reasoning, but it matters. It’s active. It shapes things.
This is where I find science really compelling—where logic begins to bump up against systems thinking, and where classical rules start to look like tools, not truths. I see logic as a foundation, but not a limit. Sometimes, insisting on a binary answer cuts off too much complexity too soon. And sometimes, acknowledging uncertainty or layered influence leads to deeper understanding.
In classical logic, the law of excluded middle asserts that every proposition is either true or false. However, in complex systems, this binary framework can be limiting. Fuzzy logic, for instance, introduces degrees of truth, allowing for values between completely true and completely false. This approach is particularly useful in systems where information is imprecise or where variables interact in non-binary ways. For example, in fuzzy set theory, an element can partially belong to multiple sets simultaneously, reflecting the nuanced realities of many real-world situations. This challenges the traditional binary perspective and provides a more flexible framework for reasoning in complex environments.
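A minimal sketch, using made-up membership values, shows how this looks in practice: once truth lives on a scale from 0 to 1, the classical laws soften rather than hold absolutely.

```python
# Fuzzy truth values live on the interval [0, 1] instead of {True, False}.
# The connectives below are the standard Zadeh operators.

def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)

def fuzzy_not(a: float) -> float:
    return 1.0 - a

# Made-up membership degree: a mild room is 0.6 "warm" (and therefore 0.4 "not warm").
warm = 0.6

print(fuzzy_and(warm, fuzzy_not(warm)))  # 0.4: "warm and not warm" is no longer simply false
print(fuzzy_or(warm, fuzzy_not(warm)))   # 0.6: "warm or not warm" is no longer simply true
```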
Moreover, in the realm of quantum mechanics, the classical law of excluded middle faces challenges due to the probabilistic nature of quantum states. Quantum logic, which has been developed to better model these phenomena, does not always adhere to classical logical principles. For instance, the superposition of quantum states implies that a system can be in multiple states simultaneously, defying the binary true-false evaluation. Studies have explored alternative logical frameworks, such as topos theory, to accommodate the unique characteristics of quantum systems, highlighting the need for more nuanced logical approaches in general thinking.
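For a rough numerical illustration (a sketch, not a treatment of quantum logic itself), consider a single qubit in an equal superposition: before measurement, neither outcome is simply true or false, only more or less probable.

```python
import numpy as np

# A single qubit in an equal superposition of |0> and |1>.
# Amplitudes are complex; measurement probabilities are their squared magnitudes.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

probabilities = np.abs(state) ** 2
print(probabilities)  # [0.5 0.5]: before measurement, neither outcome is simply true or false

# Measurement collapses the superposition into one classical outcome.
outcome = np.random.choice([0, 1], p=probabilities)
print(f"Measured |{outcome}>")
```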
Imagine you’re standing in a room with a dimmer switch and a lightbulb. Classical logic would ask: Is the light on or off? The law of excluded middle demands a binary answer: it’s either fully on (true) or fully off (false). But now imagine that the dimmer is turned halfway — the light is glowing softly. It’s not fully on, but it’s not off either. The state of the system doesn’t fit neatly into “true” or “false.” That’s fuzzy logic in action: the light might be 0.5 true.
Now add a second element — background sunlight coming through a window. The sunlight doesn’t carry any “signal” in terms of your intention to light the room, but it influences how bright the room appears. If you slowly close the blinds, the same dimmed light appears brighter. The sunlight isn’t the message, but it strengthens or weakens the total signal. It doesn’t change the switch setting, but it changes perception. This kind of indirect influence doesn’t change the truth of the original statement (“the light is on”), but it changes the system outcome — how well you see. In this way, not every factor participates in a binary decision, but some still significantly affect the result.
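If you want to see that intuition in code, here is a toy model of the dimmer-and-sunlight example. The formula and the numbers are invented for illustration only; the point is that the truth value and the perceived outcome are driven by different things.

```python
# Toy model of the dimmer-and-sunlight example; the formula and numbers are
# invented for illustration, not real photometry.

def light_truth(dimmer: float) -> float:
    """Fuzzy truth of 'the light is on': just the dimmer setting, from 0.0 to 1.0."""
    return dimmer

def apparent_brightness(dimmer: float, sunlight: float) -> float:
    """How bright the lamp appears: its output relative to the ambient light around it."""
    return dimmer / (dimmer + sunlight)

dimmer = 0.5                      # halfway on: the statement is 0.5 "true"
for sunlight in (1.0, 0.4, 0.0):  # blinds open, half-closed, fully closed
    print(f"truth={light_truth(dimmer):.1f}  "
          f"appears={apparent_brightness(dimmer, sunlight):.2f}")

# The truth value never changes, yet what you perceive does.
```

Closing the blinds in this sketch makes the same dimmed lamp appear brighter, even though nothing about the statement “the light is on” has changed.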
To expand to the “quantum realm” – the human body is a profoundly complex system that stores, processes, and responds to vast amounts of information continuously—across informational, biochemical, neurological, emotional, and behavioral dimensions. Because of this complexity, assigning binary states like “motivated” or “healthy” as simply true or false fails to capture the fluid, layered nature of human experience. A person may appear motivated in one context but feel internally conflicted or emotionally fatigued in another, showing that motivation isn’t a switch that’s either on or off—it exists on a spectrum and often fluctuates subtly in response to internal chemistry, memory, environment, or even social dynamics. Similarly, health is not just the absence of disease, but a dynamic balance of physical, mental, and emotional states that can shift throughout the day. In this way, human states resemble quantum systems more than classical ones—where multiple influences coexist, overlap, and sometimes even contradict one another until observed or interpreted. Just as a quantum particle can exist in a superposition of states, a human can hold seemingly opposing states at once: calm yet anxious, tired yet energized, broken yet resilient. What we experience is shaped not only by hard physiological data but also by context, meaning, and subjective perception—factors that resist binary classification. To understand people, then, we need a logic that can embrace ambiguity, layered influence, and non-linear dynamics—something closer to quantum reasoning than to rigid either/or thinking.
The context I’ve outlined above is important as I move on to the next chapter of thinking—one that involves distinguishing between simple, not-simple, complicated, complex, and quantum systems. These categories help make sense of the nature of different systems and the kinds of reasoning or approaches that work best with them. Simple systems are predictable and repeatable—think of a light switch or a basic recipe. Not-simple systems add a bit more variety but remain easily understandable, like adjusting a thermostat or sorting data. Complicated systems, such as airplane engines or modern IT infrastructure, require expertise and analysis, but with enough effort and the right tools, they can be mastered and mapped. Complex systems—like economies, ecosystems, or GenAI—are dynamic, adaptive, and full of feedback loops; they don’t yield to straightforward cause-and-effect reasoning. You might need to think in terms of fuzzy logic here. Yet even with GenAI and other advanced technologies, models like standard logic, systems theory, or the Theory of Constraints still provide effective tools for navigating and managing those environments.
However, once we move into the domain of human behavior—how people think, feel, decide, relate, resist, or change—we cross into something more fluid and less predictable. Managing or understanding people cannot be done purely through complicated analysis or linear planning. Human beings exist in what I often think of as a quantum realm—where states overlap, where emotion and logic coexist, where behavior collapses into clarity only when observed in context, and where subtle influences (like mood, past trauma, or unconscious bias) dramatically alter outcomes without ever appearing on a flowchart. Here, conventional management frameworks start to fall apart. You can’t blueprint someone’s transformation. You can’t plan commitment, empathy, or resilience in a spreadsheet. People are influenced by meaning, perception, and invisible networks of emotion and memory. So while classical logic and structured methods are indispensable, when it comes to people, we have to reach for something else—something more intuitive, more aware of paradox, more comfortable with ambiguity. It’s not that logic fails, but that it needs to be expanded—to hold the uncertainty, the unpredictability, and the emergent nature of what it means to be human.

I’ve come to view the state of “context-and-relationship-driven not knowing what to do” as not only common but natural for managers—especially in environments that are fluid, complex, and socially dense. It’s not a failure of leadership; it’s often the default terrain of leadership. Over time, I’ve realized that traditional decision-making models fall short here. This isn’t about lack of skill or knowledge—it’s about the limits of linear thinking in nonlinear contexts.
I’ve found Karl Weick’s work on sensemaking particularly resonant. He emphasizes that in times of ambiguity, leaders don’t start with answers—they construct meaning through cues, conversations, and shared interpretation. It’s not about “solving” but about making sense of the evolving environment. (Weick, Sensemaking in Organizations)
Relational Leadership Theory also aligns closely with my experience. Leadership doesn’t happen in isolation—it emerges through interactions, trust, and mutual influence. It’s dynamic, contextual, and constantly negotiated. (Relational Leadership, SpringerLink)
What all of this tells me is that not-knowing is not just a phase to “get through”—it’s a space to work within. Instead of treating it as a problem, I try to approach it as a condition that calls for presence, dialogue, and flexible thinking.
And there is more to it. I’ve just recently written about the wider context in a piece that explores how preparation, gut feeling, and leadership come together—especially in unpredictable situations. In that article, “Leadership, Preparation and the Role of Gut Feeling – What Alesia Can Teach Us”, I reflect on how even the most experienced leaders can find themselves in moments of uncertainty where logic alone won’t suffice. It’s in those spaces—where context shifts rapidly and relationships carry hidden signals—that instinct, embodied knowledge, and narrative intuition begin to matter more. The point I make there is that not-knowing isn’t just a lack of information—it’s often the byproduct of being in a deeply human, interconnected environment where every move influences another. What we call “gut feeling” is not irrational—it’s the convergence of pattern recognition, lived experience, and a sensitivity to relational undercurrents. When traditional models fall short, these embodied responses become not just valid—but necessary. That piece ties directly into this reflection: we aren’t just managing tasks—we’re navigating meaning.
To conclude, what I’ve been exploring throughout this reflection is the idea that techniques—whether logical principles, the Theory of Constraints, or any management framework—are just that: tools. They offer structure, clarity, and direction, but they are not the full picture. Experience fills in what theory cannot reach. And while tools are essential, it’s the thoughtful, reflective, mistake-tested use of them that truly defines capability.
So yes, people can stay illogical—if by that we mean non-linear, intuitive, relational, or emotionally guided—and still achieve results. In fact, in some contexts, those are the very qualities that make results possible. The paradox is: the best leaders often blend logic and illogic, structure and instinct, knowing which to lean on at the right time. And sometimes, as we’ve hinted all along, the real intelligence is knowing when not to rely on logic.
There isn’t a hard opposition between humanity and logical, rigid thinking; they operate in different layers. Logic gives us clarity, repeatability, and structure. Humanity brings context, emotion, contradiction, and improvisation. They’re not enemies—they’re just not interchangeable. Where logic maps, humanity navigates.
Which brings us back to Alex.
He wrote in his review of TOC: “Great for machines. Needs a patch for humans.” And the uncomfortable truth is—there is no patch. No universal upgrade, no clean overlay that makes human complexity behave like a production line. Because what Alex was really reaching for wasn’t a better tool—it was an understanding that tools have limits. The “patch” he thought he needed wasn’t a fix for TOC; it was a shift in himself.
And to respond to Alex’s review, I have an answer of my own. I leave you with this thought: most systems, whether simple or complex, and however firmly they rest on the laws of maths and logic, were, in the end, created by irrational humans. 😄