from Hacker News

Ask HN: What happened to fuzzy logic?

by _448 on 5/25/23, 9:37 AM with 75 comments

Fuzzy Logic was a much talked about field a few decades ago. What is the state of that field of research? Is it still being pursued, or are there applications of it already implemented commercially?
  • by currymj on 5/26/23, 8:00 AM

    I don't endorse the following as an accurate history, but it is a narrative that is taught within machine learning and probably has some elements of truth to it.

    If you want to take a system that's working on Boolean logic and introduce uncertainty, replacing true/false with probabilities does a great job of this. However, in the 1960s/1970s, people believed it was hopeless to have AI systems use probability to deal with uncertainty. This is because probability requires you to use Bayes' Theorem, and computing the denominator in Bayes' Theorem requires summing over an exponentially large number of different outcomes.
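
    To make the denominator point concrete, here is a minimal sketch (the names joint_prob and n_hidden are made up for illustration): getting P(evidence) by brute force means summing P(evidence, hidden) over every joint assignment of n hidden Boolean variables, i.e. 2^n terms.

        import itertools

        def evidence_probability(joint_prob, n_hidden):
            """Brute-force P(evidence): sum P(evidence, hidden) over all assignments."""
            total = 0.0
            # One term per joint assignment of the hidden Booleans: 2**n_hidden in all.
            for hidden in itertools.product([False, True], repeat=n_hidden):
                total += joint_prob(hidden)
            return total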

    Thus people came up with all kinds of alternative systems to avoid dealing with probability. Among these would have been fuzzy logic.

    However people came up with ways to cope with the computational intractability (belief propagation on Bayesian networks, better Markov-chain Monte Carlo algorithms, etc.), so probability became practically viable. And on the merits, if you can deal with the computational issues, probability seems to be much nicer than these other formal systems. So since the 1980s/1990s the probabilistic approach to AI has become dominant (even if deep learning has displaced the actual models).

  • by ojosilva on 5/26/23, 11:06 AM

    I built digital systems based on Fuzzy Logic in the US in the mid-1990s. Fuzzy was good because it allowed us to implement very complex decision making (outcomes, classification...) on very, very basic hardware. It was, in all forms, a simplification of a neural net, which was much harder to implement in hardware.

    Fuzzy didn't go very far because the same decision making could already be implemented back then with cheap processors (we had Moore's law!) and some slap-on programming, or with domain-specific logic hardwired by engineers who knew what the outcome should be given a bunch of inputs: think "balancing robot" or washing machine torque control.

    Modern ML and powerful Arduinos make fuzzy logic irrelevant today, except that Markov chains should also be irrelevant but they aren't, so who knows, maybe fuzzy could stage a comeback. Despite its huge limitations, FL is certainly better than current neural nets when it comes to transparency: humans can actually visualize what it's doing. Disclaimer: back in the day I built an "FL tracker" in Visual Basic so that you could watch the full system trickle down to a decision in real time, and it was very enlightening. Today I'd love to build the same visualizer, but for LLM generation!
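
    As a rough sketch of what those hardwired rules looked like (my own toy example, with made-up membership thresholds and rule outputs, not the actual 1990s system):

        def ramp(x, lo, hi):
            """Membership rising linearly from 0 at lo to 1 at hi."""
            return 0.0 if x <= lo else 1.0 if x >= hi else (x - lo) / (hi - lo)

        def washer_torque(dirt_level, load_kg):
            # Fuzzify the crisp sensor readings (thresholds invented for illustration).
            dirty = ramp(dirt_level, 0.3, 0.8)
            heavy = ramp(load_kg, 3.0, 7.0)
            light = 1.0 - ramp(load_kg, 2.0, 5.0)
            # Rule firing strengths: min plays the role of fuzzy AND. Every intermediate
            # value can be logged, which is the transparency point above.
            firing = {
                "dirty AND heavy -> high torque": min(dirty, heavy),
                "light -> low torque": light,
            }
            # Weighted average of constant rule outputs: a handful of multiplies and adds,
            # cheap enough for very basic hardware.
            outputs = {"dirty AND heavy -> high torque": 1.0, "light -> low torque": 0.2}
            weight = sum(firing.values()) or 1.0
            return sum(firing[r] * outputs[r] for r in firing) / weight, firing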

  • by nabla9 on 5/26/23, 8:03 AM

    (positives:)

    Fuzzy Logic has some utility in engineering applications, because it can be intuitive for humans to interpret and to create rules with. Interpretability is also the strong point of neuro-fuzzy logic (hybrid systems). Applications are usually control systems, business logic, or embedded systems with tight resource constraints.

    (negatives:)

    In academic research it's not that interesting, because it's just one multi-valued logic among others, a subset of infinite-valued logic. It's not very useful in machine learning because there is no efficient way to learn fuzzy rules automatically.

  • by h2odragon on 5/25/23, 10:38 AM

    Pratchett's "Wooly Thinking" was the 2nd version of the idea, much more popular, took over the world immediately after he coined the phrase.

    Truly, though, look at things like QAM encoding schemes and the tricks stepper motor controllers are doing. We have found ways to analog-ize a lot of things that were formerly viewed as totally digital, which I think is what the "fuzzy logic" hype boiled down to anyway.

  • by dollo_7 on 5/26/23, 7:44 AM

    Academic silo with little to no real transfer to business. ML eventually enabled building better continuous and discrete models for inference, control, and prediction.
  • by segmondy on 5/25/23, 7:12 PM

    It took the form of neural networks and deep learning.
  • by captainmuon on 5/26/23, 9:19 AM

    Fuzzy logic always seemed misguided to me. You're just calculating values with real arithmetic instead of booleans. It's not a whole different kind of mathematics.

    When I'm drawing on screen and placing a widget not on the left or the right, but at x=f*width, is that already fuzzy logic? The classical examples of Japanese washing machines determining detergent dose are not much more complicated.
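
    To make the "just real arithmetic" point concrete, a minimal sketch (my own, with arbitrarily chosen values): the standard fuzzy connectives are min, max and 1 - x on degrees in [0, 1], and at the endpoints they collapse back to ordinary Boolean logic.

        def f_and(a, b): return min(a, b)   # fuzzy AND
        def f_or(a, b):  return max(a, b)   # fuzzy OR
        def f_not(a):    return 1.0 - a     # fuzzy NOT

        # Degrees of membership instead of truth values.
        warm, dark = 0.7, 0.2
        print(f_and(warm, f_not(dark)))     # 0.7 -> "warm and not dark" holds to degree 0.7

        # At 0/1 the same operators reproduce the Boolean truth tables.
        assert f_and(1, 0) == 0 and f_or(1, 0) == 1 and f_not(0) == 1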

  • by PinguTS on 5/26/23, 9:50 AM

    Coming not from an IT background but from electrical engineering and industrial control systems, like programmable logic controllers (PLCs) and embedded control, I learned about it around 1990.

    Fuzzy logic was favored over the relatively new field of neural nets (ML as we know it today), because within the limitations of industrial control, neural nets were too costly. Yes, we already had things like the first embedded i386 parts, but those were way too expensive. Most of the PLCs were based on 16-bit microcontrollers, and even those were on the expensive side.

    So if you think commercially in terms of industrial control systems, then Fuzzy Logic was implemented and used. I assume old systems can still be found in the field even today. Remember that in industrial control, 20 years is no time.

    Today, everybody in industrial control is also talking about ML. But here again, that is still on the expensive side. Nvidia Jetsons and the like do not come cheap.

  • by wrp on 5/26/23, 9:20 AM

    Although fuzzy logic was developed by Lotfi Zadeh in the USA, it never gained the level of interest and respectability here that it did in Japan and Eastern Europe. I remember American engineers saying that fuzzy techniques were things they already understood and were doing under different names. They saw no benefit in adopting a new conceptual system.

    Among the more philosophically inclined, there was controversy over whether fuzzy logic was really something new or just a repackaging of old ideas with a misleading vocabulary. There was a book on how this debate played out in journal articles, but I can't think of the title now.

  • by vivegi on 5/26/23, 9:02 AM

    IEEE Spectrum's profile on Lotfi Zadeh and the Birth of Fuzzy Logic. https://spectrum.ieee.org/lotfi-zadeh
  • by ly3xqhl8g9 on 5/26/23, 8:08 AM

    Contrary to popular belief, perhaps we are not yet ready for fuzzy logic-based mathematics†. It is like asking what Boolean logic could do in the 13th century, before Leibniz had any glimpse of the characteristica universalis [1] (software, as opposed to the calculus ratiocinator, hardware). If cryptocurrencies are solutions in search of a problem, fuzzy logic, and especially fuzzy dual numbers logic [2] [3] [4], doesn't even know it is a solution.

    † one speculative use would be to assign degrees(?) of agency, some kind of a future unit of measurement: a person can be 100% evolved, or maybe 49% evolved and 51% designed (prosthetics, neural implants, and so on).

    [1] https://en.wikipedia.org/wiki/Characteristica_universalis

    [2] 2017, Fuzzy Dual Numbers: Theory and Applications, https://dl.acm.org/doi/book/10.5555/3172931

    [3] "Fuzzy Dual Dynamic Programming", https://hal-enac.archives-ouvertes.fr/hal-01372543/document

    [4] "Dual Numbers & Automatic Differentiation", https://blog.demofox.org/2014/12/30/dual-numbers-automatic-d...

  • by tpoacher on 5/26/23, 12:20 PM

    There are two things that are referred to as "fuzzy logic".

    The first is the true fuzzy logic, which consists of Fuzzy Set theory and Fuzzy Measure theory. The two are distinct but interrelated, and heavily mathematical in nature. In particular, Fuzzy Measure theory completely subsumes Probability theory (i.e. probability theory is a strict subset of fuzzy measure theory, and therefore anything that can be expressed in probabilistic terms can be expressed in equivalent fuzzy measure theory terms, with leeway for added generality).
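
    A toy illustration of that subset relationship (my own example, not the parent's): a possibility measure is a perfectly good fuzzy measure, i.e. normalized and monotone, but it is not additive, so it is not a probability measure.

        from itertools import chain, combinations

        # Possibility distribution over a tiny universe (values chosen arbitrarily).
        pi = {"cold": 0.2, "warm": 1.0, "hot": 0.7}
        universe = set(pi)
        subsets = [set(c) for c in chain.from_iterable(
            combinations(universe, k) for k in range(len(universe) + 1))]

        def possibility(event):
            """Possibility measure: Pi(A) = max of pi over A, with Pi(empty set) = 0."""
            return max((pi[x] for x in event), default=0.0)

        # Fuzzy-measure axioms hold: boundary conditions plus monotonicity.
        assert possibility(set()) == 0.0 and possibility(universe) == 1.0
        assert all(possibility(A) <= possibility(B)
                   for A in subsets for B in subsets if A <= B)

        # But additivity fails on disjoint events, so it is not a probability measure.
        print(possibility({"cold", "hot"}), "vs",
              possibility({"cold"}) + possibility({"hot"}))  # 0.7 vs 0.9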

    The second is a class of techniques, based on Fuzzy Set / Fuzzy Measure theory, which implements some very simple behaviours that let you solve a particular class of problems. This is the standard Sugeno/Mamdani logic: fuzzy inference as the application of a few straightforward rules for conducting conjunction/disjunction among fuzzy rules.
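
    A minimal sketch of that second sense, in the Mamdani style (my own toy controller; the membership triangles and rules are invented): fuzzify the input, use min for the rule antecedents, clip each rule's output set at its firing strength, aggregate with max, and defuzzify with a centroid.

        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        # Fuzzy sets over temperature (deg C) and fan speed (%), plus one rule per pair.
        temp_sets = {"cool": (0, 10, 20), "warm": (15, 25, 35), "hot": (30, 40, 50)}
        fan_sets  = {"low": (0, 10, 40), "medium": (30, 50, 70), "high": (60, 90, 100)}
        rules = [("cool", "low"), ("warm", "medium"), ("hot", "high")]

        def mamdani_fan_speed(temperature):
            # 1. Fuzzify: how strongly does the input match each antecedent?
            firing = {ante: tri(temperature, *temp_sets[ante]) for ante, _ in rules}
            # 2. Clip each consequent at its firing strength, 3. aggregate with max,
            # 4. defuzzify by the centroid of the aggregated output set.
            num = den = 0.0
            for speed in range(0, 101):
                mu = max(min(firing[ante], tri(speed, *fan_sets[cons])) for ante, cons in rules)
                num += speed * mu
                den += mu
            return num / den if den else 0.0

        print(round(mamdani_fan_speed(32.0), 1))  # blends the "medium" and "high" output sets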

    The first is still thriving, but largely in the theoretical realm.

    The second is shunned, on the basis that it feels like a bunch of interpolation heuristics, and a Bayesian approach is usually "better", even if not as intuitive or straightforward.

    Unfortunately this has had a self-fulfilling impact, in the sense that any literature which would have been better described as fuzzy is labelled as probabilistic, even if the concepts involved are not, strictly speaking, probabilities, and should more correctly have been described as fuzzy measures.

    Personally I have published papers where I had to "disguise" fuzziness as probability just to get it considered for publication, or where I had to "apologize" for the term, or make it explicit that fuzziness in this context is unrelated to the set of tools that is typically thought of as "the fuzzy approach".

    Having said that, one area that is thriving is the use of Fuzzy methods in the context of more general AI / neural network methods, particularly in the context of explainability. Type 2 Fuzzy methods in particular (which introduce one or more layers/dimensions of fuzziness/uncertainty over the membership function itself) are quite an active area of research.

    Finally, there are some frameworks that borrow heavily from fuzzy theory, but strictly speaking are independent. The one that interested me the most in recent years was "Subjective Logic", proposed by a guy called Audun Jøsang; this sounded like a very interesting logic framework and I was very keen to use it in my work, but in the end I just didn't have the time to justify that investment ... It's worth looking up though. The basic idea is that you have a framework where you have rules that have a built-in way of carrying uncertainty with them, with a one-to-one mapping with beta probabilities over your fuzzy-like rules, and a set of logical operations (which map to the usual logical operators) that are adapted to carry the uncertainty with them. This means that you can reduce your problem into a set of simple logical statements, and the framework would then take care of the uncertainty calculations for you seamlessly.
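
    For a flavour of what that mapping looks like, a rough sketch from memory of Jøsang's binomial opinions (the field names and the prior weight W = 2 are how I recall the papers; treat it as an approximation, not a reference implementation):

        from dataclasses import dataclass

        W = 2.0  # non-informative prior weight used in the Beta mapping

        @dataclass
        class Opinion:
            belief: float       # b
            disbelief: float    # d
            uncertainty: float  # u, with b + d + u == 1
            base_rate: float    # a, the prior used when there is no evidence

            def projected_probability(self):
                # Collapse the opinion to a single expected probability.
                return self.belief + self.base_rate * self.uncertainty

            def to_beta(self):
                # Evidence view r = W*b/u, s = W*d/u, then Beta(r + a*W, s + (1-a)*W).
                # A dogmatic opinion (u == 0) amounts to infinite evidence: no finite Beta.
                if self.uncertainty == 0:
                    raise ValueError("dogmatic opinion has no finite Beta equivalent")
                r = W * self.belief / self.uncertainty
                s = W * self.disbelief / self.uncertainty
                return r + self.base_rate * W, s + (1 - self.base_rate) * W

        # "Probably true, but with noticeable uncertainty."
        print(Opinion(0.6, 0.1, 0.3, 0.5).projected_probability())  # 0.75
        print(Opinion(0.6, 0.1, 0.3, 0.5).to_beta())                # (5.0, about 1.67)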

  • by ndsipa_pomu on 5/26/23, 9:12 AM

    Reminds me that I played around a tiny bit with Fril back in the day. Prolog with probabilities/uncertainties.

    https://en.wikipedia.org/wiki/Fril

  • by regularfry on 5/26/23, 8:33 AM

    It always struck me as an interesting way to get out of having to do a bunch of horrible control theory calculus and still get credit for it.
  • by born-jre on 5/26/23, 9:52 AM

    lol, I did not know fuzzy logic was a separate field, that's what I call ML/AI. I also named one of my repos fuzzylogic [0].

    [0]: https://github.com/temphia/fuzzylogic

  • by pestatije on 5/25/23, 9:44 AM

    yes, washing machines...you put your dirty clothes in and just press start...the fuzzy logic works out the best program for it
  • by revskill on 5/26/23, 7:42 AM

    Fuzziness means illogic?