from Hacker News

Artificial Intelligence: Foundations of Computational Agents

by rramadass on 3/16/25, 11:36 AM with 50 comments

  • by simonw on 3/19/25, 2:54 AM

    Because I collect definitions of "agent", here's the one this book uses:

    > An agent is something that acts in an environment; it does something. Agents include worms, dogs, thermostats, airplanes, robots, humans, companies, and countries.

    https://artint.info/3e/html/ArtInt3e.Ch1.S1.html

    I think of this as the "academic" definition, or sometimes the "thermostat" definition (though maybe I should call it the "worms and dogs" definition).

    Another common variant of it is from Peter Norvig and Stuart Russell's classic AI text book "Artificial Intelligence: A Modern Approach": http://aima.cs.berkeley.edu/4th-ed/pdfs/newchap02.pdf

    > Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
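    The sensors/actuators definition above can be sketched as a tiny simulated loop. This is an illustrative sketch, not code from either book; the thermostat rule, setpoint, and crude temperature dynamics are all made up for the example:

    ```python
    # A minimal sketch of the "thermostat" sense of agent: it perceives its
    # environment (a temperature reading) and acts on it (heater on/off).
    # All names and numbers here are illustrative assumptions.

    def thermostat_agent(percept: float, setpoint: float = 20.0) -> str:
        """Map a temperature percept to an action."""
        return "heat_on" if percept < setpoint else "heat_off"

    def run(steps: int, temp: float = 15.0) -> list[str]:
        """A tiny environment loop: each action feeds back into the next
        percept, which is what makes this an agent acting *in* an
        environment rather than a pure function."""
        actions = []
        for _ in range(steps):
            action = thermostat_agent(temp)
            actions.append(action)
            temp += 1.0 if action == "heat_on" else -0.5  # crude dynamics
        return actions
    ```

    Even this trivial coupling of percept and action satisfies the academic definition, which is exactly why people find it so broad.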

  • by light_triad on 3/19/25, 6:47 AM

    Here are a few more definitions of agents:

    Agents are a coupling of perception, reasoning, and acting with preferences or goals. They prefer some states of the world to other states, and they act to try to achieve the states they prefer most (this book)

    AI agents are rational agents. They make rational decisions based on their perceptions and data to produce optimal performance and results. An AI agent senses its environment with physical or software interfaces (AWS)

    An artificial intelligence (AI) agent refers to a system or program that is capable of autonomously performing tasks on behalf of a user or another system by designing its workflow and utilizing available tools (IBM)

    Agents are like layers on top of the language models that observe and collect information, provide input to the model and together generate an action plan and communicate that to the user — or even act on their own, if permitted (Microsoft)

    Assumptions on which these definitions differ:

    Focus on rationality vs. goal-seeking vs. autonomy

    Whether tool use is emphasized

    Architectural specificity

    Relationship to users

    Decision-making framework (optimality vs. preference satisfaction)
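    The book's preference-satisfaction definition (the first one above) can be sketched as an agent that predicts the outcome of each action and picks the one leading to the state it prefers most. The states, transitions, and utilities below are invented for illustration and not taken from any of the quoted sources:

    ```python
    # Hedged sketch of preference satisfaction: the agent prefers some
    # states of the world to others, and acts to try to achieve the
    # states it prefers most. Everything here is a made-up toy example.

    preferences = {"warm": 1.0, "cold": -1.0, "hot": -0.5}  # utility per state

    def predict(state: str, action: str) -> str:
        """Illustrative transition model: which state does an action lead to?"""
        transitions = {
            ("cold", "heat"): "warm",
            ("cold", "wait"): "cold",
            ("warm", "heat"): "hot",
            ("warm", "wait"): "warm",
        }
        return transitions[(state, action)]

    def choose(state: str, actions=("heat", "wait")) -> str:
        """Pick the action whose predicted outcome the agent prefers most."""
        return max(actions, key=lambda a: preferences[predict(state, a)])
    ```

    Swapping in "optimal performance" for the preference table gives roughly the AWS framing, which shows how close these definitions are once written down.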

  • by Peteragain on 3/19/25, 9:02 AM

    To add to the mix, agents are nominally proactive, rather than tools wielded by someone. This (again nominally) means having goals, although the goals can often be in the observer's mind rather than in the agent itself. Reasoning with goals is trivial for humans, but the algorithms get hairy.
  • by antonkar on 3/19/25, 10:23 AM

    Intelligence - space-like, matter-like (an LLM is a bunch of vectors, a static geometric shape; you just need memory to store it). It can have analogs of volume, mass, and density. The static 4D spacetime of the universe or multiverse is maximally intelligent but non-agentic.

    Agent - time-like, energy-like (you need a GPU to compute it). An agent changes the shape of the environment it operates in, including its own shape. You can count agents, their volume of operations, their speed of changing shapes (volumetric speed), acceleration… The Big Bang had zero intelligence (with maximal potential intelligence) but was, and still is, maximally agentic.

  • by throwaway81523 on 3/19/25, 3:46 AM

    The book is from 2023; the link should be edited to reflect that.
  • by bbor on 3/19/25, 5:37 AM

    A) Looks really good, will be checking it out in depth as I get time! Thanks for sharing.

    B) The endorsements are interesting before you even get to the book; I know all textbooks are marketed, but this seems like quite the concerted effort. For example, take the quote from Judea Pearl (an under-appreciated giant):

      This revised and extended edition of Artificial Intelligence: Foundations of Computational Agents should become the standard text of AI education.
    
    Talk about throwing down the gauntlet - especially since Russell looks up to him as a personal inspiration!

    (Quick context for those rusty on academic AI: Russell & Norvig's AI: A Modern Approach ("AIAMA"), first published in 1995 with a 4th edition in 2020, is the de facto book for AI survey courses, supposedly used in 1500 universities in 9 languages as of 2023.[1])

    I might be reading drama into the situation that isn't there, but it sure looks like they're trying to establish a connectionist/"scruffy", ML-based, Python-first replacement for AIAMA's symbolic/"neat", logic-based, Lisp-first approach. The 1st ed hit desks in 2010, and the endorsements are overwhelmingly from scruffy scientists & engineers. Obviously, this mirrors the industry's overall trend[2]... at this point, most laypeople think AI is ML. Nice to see a more nuanced--yet still scruffy-forward--approach gaining momentum; even Gary Marcus, a noted Neat, is on board!

    C) ...Ok, after writing an already-long comment (sorry) I did a quantitative comparison of the two books, which I figured y'all might find interesting! I'll link a screenshot[3] and the Google Sheet itself[4] below, but here are some highlights between "AMA" (the reigning champion) and "FCA" (the scrappy challenger):

    1. My first thesis was definitely correct; by my subjective estimation, AMA is ~6:3 neat:scruffy (57%:32%), vs. a ~3:5 ratio for FCA (34%:50%).

    2. My second thesis is also seemingly correct: FCA dedicates the last few pages of every section to "Social Impact", aka ethics. Both books discuss the topic in more depth in the conclusion, representing ~4% of each.

    3. FCA seems to have some significant pedagogical advantages, namely brevity (797 pages vs. AMA's 1023 pages) and the inclusion of in-text exercises at the end of every section.

    4. Both publish source code in multiple languages, but AMA had to be ported to Python from Lisp, whereas FCA is natively in Python (which, obviously, dominates AI atm). The FCA authors actually wrote their own "pseudo-code" Python library, which is both concerning and potentially helpful.

    5. Finally, FCA includes sections explicitly focused on data structures, rather than just weaving them into discussions of algorithms & behavioral patterns. I for one think this is a really great idea, and where I see most short-term advances in unified (symbolic + stochastic) AI research coming from! Lots of gold to be mined in 75 years of thought.

    Apologies, as always, for the long comment -- my only solace is that you can quickly minimize it. I should start a blog where I can muse to my heart's content...

    TL;DR: This new book is shorter, more ML-centric, and arguably uses more modern pedagogical techniques; in general, it seems to be a slightly more engineer-focused answer to Russell & Norvig's more academic-focused standard text.

    [1] AIAMA: https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Mod...

    [2] NGRAM: https://books.google.com/ngrams/graph?content=%28Machine+Lea...

    [3] Screenshot: https://imgur.com/a/x8QMbno

    [4] Google Sheet: https://docs.google.com/spreadsheets/d/1Gw9lxWhhTxjjTstyAKli...