The illusion of AI understanding and creativity

Today’s generative Artificial Intelligence (AI) applications are impressive. Large Language Models (LLMs), such as ChatGPT, easily pass the Turing Test and are thus indistinguishable from humans in an online text conversation. They are used in professional settings to handle customer inquiries, draft legal texts, and perform a variety of other tasks that, until recently, only humans could manage. Other generative AIs produce high-quality images, music, and video, often with high artistic value, based on simple descriptions or ‘queries.’ It has thus become difficult for the average educated person to avoid the conclusion that today’s AIs actually understand the questions or tasks posed to them, and even have artistic sensitivity.

Yet, nothing could be further from the truth. For AIs—yes, even today’s AIs—do not understand anything; nothing at all. And they have no creativity in any sense of the word that could be even remotely related to human creativity. Allow me to elaborate.

Let’s take LLMs as our example, for the feats of ChatGPT tend to be regarded as the most compelling when it comes to attributing understanding and creativity to generative AIs. LLMs are transformers (a technical term), meaning that they take input text, apply a series of geometric transformations to it, and spit out the resulting text one word at a time. The answer ChatGPT gives you is a ‘transformation’ of your question.

The particular parameters of the geometric transformations applied are defined during a so-called ‘training’ phase, when the LLM is exposed to an enormous database of human-written text. Its parameters are then iteratively adjusted—calibrated, fine-tuned—so as to represent how words tend to appear together in human-produced text. Once the training is complete and the parameters set, the LLM can then infer, given an input sentence (i.e. the question or query), which word is most likely to appear next in the output sentence (i.e. the answer). Once that is done, the new output sentence—with one more word appended to its end—is fed back to the LLM, which then infers the next word, and so on, until the answer is complete. This so-called ‘inference’ phase is what we users get exposed to when we interact with ChatGPT online.
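
To make the mechanics concrete, here is a minimal sketch of that inference loop in Python. The model object and its next_token_distribution method are hypothetical stand-ins for a trained transformer, not any real library API; only the shape of the loop matters:

```python
import random

def generate(model, prompt_tokens, max_new_tokens=50, stop_token="<eos>"):
    """Autoregressive inference: append one likely token at a time."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # The trained parameters assign a probability to every candidate
        # next token, given the sequence so far.
        probs = model.next_token_distribution(tokens)  # {token: probability}
        # Pick the next token according to those probabilities (greedy or
        # top-k sampling are variations on the same idea).
        next_token = random.choices(list(probs), weights=list(probs.values()))[0]
        if next_token == stop_token:
            break
        tokens.append(next_token)  # feed the lengthened sequence back in
    return tokens
```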

From the point of view of the LLM, the text database used during training contains merely a gigantic collection of signs. These signs happen to be English words—or some parts of words—but they might as well be squiggles; it doesn’t matter, for the LLM is not aware of the meaning of the words (or of anything else, for that matter). All it is trained to do is to capture and represent the statistical regularities with which words occur together, or follow one another, in the human-written text of the training database. If squiggles were used during training—instead of words—the LLM would still capture the regularities with which the squiggles tend to appear; from its point of view, it’s all the same thing. The LLM has no understanding of the meaning of the text it is trained on. It merely deals with how signs—squiggles, words—relate to one another in the training database.
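
A toy illustration of this symbol-blindness, in Python: counting how often one sign follows another works identically whether the tiny, made-up corpus below contains English words or arbitrary squiggles:

```python
from collections import Counter

def bigram_counts(corpus):
    """Count how often each sign is followed by each other sign."""
    counts = Counter()
    for sequence in corpus:
        for a, b in zip(sequence, sequence[1:]):
            counts[(a, b)] += 1
    return counts

words = [["the", "cat", "sat"], ["the", "dog", "sat"]]
squiggles = [["§", "♢", "~"], ["§", "♤", "~"]]

print(bigram_counts(words))      # ('the','cat'): 1, ('cat','sat'): 1, ...
print(bigram_counts(squiggles))  # same structure; the signs are irrelevant
```

A real LLM captures vastly subtler regularities than bigram counts, but its indifference to what the signs mean is exactly the same.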

Once the statistical regularities with which words tend to occur are captured in the LLM’s parameters, the LLM can start inferring which words to use in response to a query. From its own point of view, its answer is thus just a series of squiggles whose meaning it does not understand; it only knows that these are the squiggles that are most likely to appear following your query, given the way squiggles appeared in the training database. That’s all there is to it. At no point does understanding or creativity come into the picture.

So why does it seem to us as though the LLM really did understand our query, and produced a fully understandable answer? How does the LLM produce such coherent outputs if it has no understanding of language? The answer is quite simple: it’s because the LLM was trained on human-written text, and it is always a human who interprets its outputs. Now, humans do understand what the words mean! The understanding involved here is thus always human understanding, as embedded in both the training database and the interpretation of inferred answers. The meaning we discern in an answer produced by ChatGPT is (a) the meaning imparted on the training database by the humans who wrote the corresponding texts, and (b) the meaning we impart on the answer when reading and interpreting it. ChatGPT itself only ever sees squiggles and the statistical regularities with which they tend to occur; it understands nothing; it creates nothing; it only rearranges—‘transforms’—meaningless squiggles. All meaning is imparted and projected on the squiggles by us humans.

The same goes for generative AI art: all artistic creativity involved is that of the human artists who composed the images used in the training database. All the AI ever does is rearrange—‘transform,’ combine—elements of those images based on a query. Generative AIs thus merely recycle the products of human understanding and creativity, nothing else. The only reason why ChatGPT can pass a bar examination is that it was trained on text written by capable human lawyers. If there weren’t human lawyers, ChatGPT would produce gibberish in a bar examination. The only reason it can tell you what Analytic Idealism is, is that it was trained on text written by me; it has no understanding of Analytic Idealism. The only reason other generative AIs can produce beautiful art is that they were trained on beautiful art produced by creative, sensitive people. If you take human input out of the equation, generative AIs can do nothing; they have no understanding or creativity of their own; they just transform—recycle—human understanding and creativity.

That’s why there is a strong sense in which the output of generative AIs is always a—sophisticated, complex—form of plagiarism. AIs can never produce something whose building blocks weren’t first produced by human beings. At best, AIs can find associations—connections—across different products of human creativity and insight that would, otherwise, be difficult for humans to find on their own, since AIs operate on much larger training databases than humans can accommodate in their minds. But the building blocks are always human-produced; no exceptions. The meaning is always human-imparted; no exceptions. There is no such thing as AI creativity or understanding.

The problem, however, is that the plagiarism is so sophisticated and nuanced that a PhD in computer science and engineering is necessary for one to understand what is truly going on. And things will only get worse as larger and larger AIs—with more and more parameters—are trained on larger and larger databases. The illusion of artificial understanding and creativity, already so compelling, will become irresistible for the vast majority of people. This is a great danger, for we risk losing sight of our own value and dignity by projecting all of it onto electronic mechanisms. This is a form of ‘kenosis,’ an emptying out of ourselves, wholly unjustified by the facts.

Businesses see so much value in generative AI because of its effectiveness in recycling, adapting, and re-using human output. If a few lawyers somewhere managed to write very clever legal texts, an AI trained on those texts can produce clever legal texts for your business on demand, without your having to pay new lawyers to do the same kind of creative, intellectual work again; someone else, somewhere else, already paid for the originals. If clever artists have produced a large database of effective illustrations somewhere, you don’t need to pay new artists to do it for you again; an AI can cleverly re-use and adapt that previous human output to your particular needs. Economically, this is incredibly efficient. But it requires no understanding or creativity beyond those already embedded in the training database and the minds of the people who contemplate the outputs of the AI. The latter simply rearranges things.

It is critically important for us to understand that AI does not replace human creativity and understanding; on the contrary, it entirely relies on them. Its value resides solely in stretching and leveraging the re-use potential of human production, not in replacing it. AI amplifies the reach of human productivity; it doesn’t render it redundant. All meaning and all creativity discernible in the outputs of AIs are human meaning and human creativity. Without human input in the form of training databases, AIs are wholly useless. Without the understanding projected by humans onto their outputs, AIs are only capable of spitting out meaningless squiggles. Artificial Intelligence ultimately is human intelligence.

55 comments:

  1. How do we know that (most) human minds are not functioning the same way as an LLM, i.e. that most humans are just recombining words/stuff they learnt?

    1. Human culture started from scratch at some point in history, so we obviously do something more than just recombine existing stuff.

    2. I don't see why. Ancient people were just recombining ancient stuff, adding new bits to it. If LLMs can add bits to human stuff, then they are just as good as humans. Perhaps in order to add new bits it is enough to use a random number generator :)

    3. Do you not see the obvious infinite regress in your argument? If you go back far enough in time, at some point there was nothing to recombine; culture has to have started from zero at some point. It is silly to think that there has always been stuff to recombine, all the way to the moment of the big bang. Human culture was bootstrapped from scratch. We created stuff. We, people, have authentic creativity and understanding.

    4. It is not an infinite regress if the mind at large is the origin of all recombination of stuff.

    5. Bernardo, your article on AI is too much like how past great thinkers used to think the earth was the center of the universe. Just look at the way AI plays chess: the AI makes moves that humans never do. In fact, you can say humans play chess by plagiarizing/learning from past plays. In fact, grand masters can learn new strategies by playing with the AI. The creative way that AI plays chess has already surpassed the creativity of humans in chess, and this creativity came about through chess AI playing against chess AI again and again.

    6. I think it's a mistake to think that AI's unconventional chess moves are due to intelligence. They are probably just random moves that happen when the AI can't see any reason to do anything else. That leads to unusual situations, from which both the AI and its opponents can learn. AI doesn't have to be intelligent to be an interesting tool for brainstorming.

    7. Folks, chess engines and LLMs are different technologies. Chess is a problem with a finite number of possible solutions - the engine can try all possible moves in a position and calculate ahead further and faster than a human can. It's like putting a complicated math operation into a calculator - a human could do the calculation manually, but it would take much longer. That doesn't mean it understands what it's doing. There was an engine for the board game Go which beat the world champion, but was later beaten by an amateur who found a crucial flaw in its strategy.
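
      In pseudocode terms, such an engine does something like the following minimax sketch (Python; legal_moves, apply, and evaluate are hypothetical stand-ins for a real chess implementation, and real engines add refinements like alpha-beta pruning):

      ```python
      def minimax(position, depth, maximizing):
          """Mechanically search ahead and score positions; no understanding involved."""
          moves = position.legal_moves()
          if depth == 0 or not moves:
              return position.evaluate()  # numeric score, e.g. material balance
          scores = [minimax(position.apply(m), depth - 1, not maximizing)
                    for m in moves]
          return max(scores) if maximizing else min(scores)
      ```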

  2. An excellent essay on A.I. It should be mandatory reading for all students of computer engineering.

  3. I think lkelemen's comment is quite significant. The article raises the question of the extent to which most humans are creative and understand things. It is at least worth exploring this possibility. We know, for example, that most people never change their political views, regardless of the arguments presented on the other side. Undoubtedly, some people are very creative and do understand things deeply. Perhaps this includes people with PhDs in Philosophy and Computer Engineering - I'm sure it does. However, it seems to me at least plausible that most people, for most of their lives, are just doing what an LLM does, albeit with greater subtlety and based on vastly more training data, including experience of the physical world.

    The article starts off by saying "AIs ... do not understand anything". The key question is, what do you mean by "understand", and is your definition open to being tested against human and artificial intelligence?

    1. What would an AI have to do to convince you that it has the "capacity to create and understand", or are you saying that by definition it never would?

    2. This is nonsense thinking. I build AIs. Right now I am designing an edge-LLM circuit. It doesn't need to tell me anything because I know everything about how it works, as I created it. And I know that it doesn't understand words; and neither is it creative, for it doesn't need to. I build these things. I know what's going on without their needing to tell me.

    3. "I know that it doesn't understand words; and neither is it creative, for it doesn't need to. I build these things. I know what's going on without their needing to tell me."

      Bernardo: If the mind at large created you it could "say" the same about you and about any human, alien, etc...

    4. A non-living (non-metabolizing) being could not have an embedded soul (https://w.wiki/AWMA) for first-person experience as in a living being; thus an AI machine will never have first-person understanding/creativity.

    5. Is it possible that one day we invent a technology that is actually intelligent and sentient? Maybe. But I agree with Bernardo that LLMs are not it. Every time a prompt is given to ChatGPT, it solves a predictive problem - "if one human said this to another human, what response would likely be given". Is this how you approach conversation with other people? A human may have various agendas in a conversation, but we never talk by "predicting likely responses".

  4. Dr. Kastrup, your arguments are interesting but unconvincing. They raise the question: "if an emergent AI did exist, how would you determine it?" As I understand it, you have concluded a priori that such a phenomenon cannot exist.
    My own technical background is quite similar to yours, though not so extensive. Math degree with physics minor, industry equivalent of a masters in computer science. Experience in R/T scientific simulation and measurement, with some work with parallel processors. Graduate work in philosophy of science. I'm a fan of your thesis that reality is not reductionist. Nonetheless, your apparent conclusion that AI could not become an emergent awareness astounds me!
    The argument that a process different from humanity's can't result in self-awareness seems specious to me at best. I would note here that the use of stochastic processes in AI programming provides a potential mechanism for higher orders of reality beyond the mechanical to be present.
    In summary, if it quacks like a duck, I see no reason not to call it a duck. In my view, your arguments need much more extensive definition of terms or, at the least, a new form of the Turing test to distinguish ducks from quacking decoys.

    1. Your appeal to the quacking duck analogy is severely misplaced. AIs are designed to imitate humans, so of course they 'quack' like humans; by construction. This gives us no reason whatever to construe that they may actually understand like humans do. Otherwise, you would have to conclude that shop window mannequins probably understand like humans too, since they look like humans. That something made to look like something else does in fact resemble it is trivial, and says nothing about the nature you are trying to attribute to it.
      "Emergence" in this context is just an appeal to an undefined miracle; it means nothing; it's just a label to identify an incoherent unknown. I build edge-LLM circuits; I know how they operate; I know what they do and how they do it, because I design and build them. Based on this knowledge, I also know that they understand nothing and create nothing. Believing otherwise is necessarily supported just by vague handwaving without substance.

    2. Bernardo, maybe you think understanding is, by definition, what humans do, but not what AI does. If so, there is not much more to be said. If you don't think this, how would we test that an AI can understand?

  5. Regarding AI and awareness: In Brief Peeks Beyond, Dr Kastrup provides a good discussion on the issue of (particularly well educated) people over-identifying with their intellects. Perhaps this is relevant to the confusion some people seem to have regarding this issue. An AI may appear to have a sort of intellect in terms of how it processes information, but that has nothing to do with conscious awareness.

  6. Thank you for this article, Dr. Bernardo. I agree with all of what you said. All of AI can be summarized as logical induction and deduction. Human creativity cannot be reduced to this. There is a lot more going on. For example, humans are also capable of abductive reasoning. We are able to form hypotheses about the world, based on what we see and our past knowledge & experience, prove them, and can thus expand our knowledge.

    The next time you meet an AI enthusiast, ask them: "has AI ever invented anything?"

  7. I've read all the comments, including Bernardo's, and I feel like your arguments are missing the point.

    I'm a professional composer, so I have a good idea of what creativity "feels" like. If I think about it honestly (which is not easy for the ego of a creative person), I must admit that creativity is always a process of combining past influences in a new way. And this "new way" is not some magic that only brains can conjure; it's just a good combination that you were lucky enough to try in a moment of "inspiration" - whatever this word means...

    Human culture, language, and creativity evolved much more slowly than it takes for a neural network to be trained - that's all the difference. The origin of our culture was a slow "evolutionary" process that began with random elements, which were selected for their effectiveness in achieving communication goals.

    Neural networks in general can evolve much faster than human brains and, like the brain, show emergent qualities that can surpass the material they were trained on. By "surpass" I mean find new combinations of previous "ideas", but much faster than we do; and then, if those ideas are fed back to the system (synthetic training materials), it can potentially reach exponential growth in abilities - superintelligence.

    As for your claim, Bernardo, that since you built the system you know how it works - no one knows what happens in the black box of a neural network, not even the ones who built it.

    So in principle a.i. can gain intelligence and even creativity far better than humans can. But it will never experience any qualia; it will always be "dead" inside, so the only ones who will ever be able to experience and appreciate its creativity and super intelligence are conscious entities like ourselves.

    In other words, a.i. will eventually surpass humanity in any aspect imaginable, but only humanity will be able to experience it. Human creativity and intelligence is no different, in principle, from that of neural networks, except that we have consciousness to accompany it.

    To summarize in an analogy: a.i. is a mirror that can show us a much more advanced version of our future selves, but without being perceived by our conscious entity, this mirror has no meaning by itself.

    1. I too am a professional creator (I write music software), and agree very much with the above. I think human intelligence is different from today's AI in that it evolves in the presence of information coming from our sense organs -- direct feedback from the world outside. It is our inability to _really_ comprehend that outer world that leads to creativity.
      AI is great for brainstorming, but it can't get beyond what we've all said and done before. Only our physical senses can do that.

    2. If there really were no creative process in compositions, how would you explain how someone like Miguel de Cervantes or Quevedo could write a novel like Don Quixote or El Buscón? Yes, they are recombinations of words, with reasoned algorithms, but you’re missing something fundamental: the historical and cultural context. Their work is a living representation of the reality these individuals were experiencing, with consciousness. That reality cannot be explained without a need for change, which emerges from the light of consciousness.

      An AI can reproduce a copy of an already invented work or combine multiple works to create an impressive result from the mix, but the magic of a Beatles song, created in the historical context of Liverpool, cannot be replicated. You could place an AI in 1960 and ask it to create a song considering the historical situation of the city and the music played in Liverpool’s clubs, and it might produce something fun, curious, or quirky—but it would also be artificial and possibly ridiculous. It could never have the authenticity of a song like "Love Me Do." It’s impossible because human authenticity is unique.

    3. @juan Yes, well said, agreed. What people create is context-dependent.
      If Donald Hoffman is right, and I think he is, there is information beyond that which our sense organs report. That means we can't assume that the brain cells we observe are all there is to ultimate reality. Just reproducing a brain isn't going to cut it.
      Beyond that, I don't think anyone has any idea how or where our memories are stored, or what role _time_ plays in that context. No amount of poking at a computer desktop will tell you what its memories even _are_, let alone how they are stored...
      I think time is simply a strategy for dealing with complexity. If it didn't exist, we would be overwhelmed. Memories are just part of that strategy.
      (BTW: I think "consciousness" is a red herring. This whole debate would get further if we simply avoided the word.)

    4. I don't think, James Ingram, that '(...) "consciousness" is a red herring. This whole debate would get further if we simply avoided the word'. This whole discussion is *about* consciousness.
      Could it be that 'creativity' and 'creating' are the red herrings? If everything is already there, in the universal consciousness (-at-large), then 'creators' are in fact revealing things, not making them.
      If AI is not acting through the obfuscated mind, it has no connection to consciousness-at-large, and it has no access to unrevealed things in that consciousness. Therefore it cannot 'create', i.e. reveal, new things. I guess.

    5. @Rob: Yes, sorry, I hit the send button too early, and should explain:
      I think time and consciousness are mutually contingent, but that time is the more fundamental (and simpler) idea. A viable theory of experienced time ought to illuminate both consciousness and creativity, so finding one would be the best way to solve both problems.
      As I (nearly) said above, I think of time as nature's strategy for reducing complexity for single brain/mind instances.

      I'm rather uncomfortable with both Kastrup's Universal Consciousness and Hoffman's Conscious Agents. Using the word "conscious" here implies that both concepts have something in common with personal consciousness (which depends on personally experienced time). But that can't be the case: they must be _beyond_ experienced time. They can't have memory in the sense that we have memory.
      (A possible way round that would be to postulate a Supreme Being that experiences time in the way we do, but that wouldn't solve the problem of time itself.)

      Both Universal Consciousness and Conscious Agents, if they exist, are beyond our "dashboard", so are, in principle, unknowable. There's nothing we can meaningfully say about them. I think that trying to theorize beyond the (space+time) dashboard can only lead to pseudo-questions like "How many angels fit on the head of a pin?".
      For me, working on the future of music notation, it's more productive just to stay on this side of the dashboard, and treat space and time as the strictly separate entities they appear to be. Music notation has been plagued for years by trying to pretend that time is just a further dimension of space, ignoring _memory_ altogether...

    6. Thank you for your elaborate and considered reply, James.

    7. James, I don't think time is more fundamental, as time emerges when a particle has mass and thus has inertia and is slower than a photon. A quantum field, on the other hand, has no mass and no inertia, so it must be timeless.

    8. @Usman: Quantum theory is part of physics, and current physics ignores human memory -- which may be why it fails to _explain_ why the "wave function collapses when an observation is made". All the physicists are actually doing is making a mathematical (toy) model of their observations, without _explaining_ anything.

      The idea of physical time starts with the _observation_ of physical experiments. Observation involves humanly experienced, mental time and _memory_. It's very important to preserve a clear distinction between these two kinds of "time". They are completely different concepts.

      Mathematical models, like Donald Hoffman's (which uses recursion), are actually on our side of the "dashboard". That's all we've got! And they are the best way to make progress! :-)

    9. In deep sleep, one *experiences* egolessness, spacelessness, and timelessness, but never consciousness-lessness. This simple fact proves that consciousness is the ultimate fundamental, not *time*, as memory (past) and expectation (future) are just illusory experiences in the present moment.

    10. @Usman: I think you must be referring to unconscious memories of stimuli received while in deep sleep. As far as I know, such memories can affect our conscious actions without us being aware of them. But in the end, they manifest in conscious time, so they still have to be considered _memories_. For me, _all_ memories, conscious or otherwise, are temporal _qualia_. So they are part of the Hard Problem of Consciousness, for which we have no viable theory, mathematical or otherwise.

      Apropos the main topic of this thread: it's worth noting that the "memory" used by LLMs is stored quite prosaically on physical media (that's how LLMs are trained). I don't think anyone really knows how human memories (qualia) are stored.

  8. That which embodies a Human can choose to merely operate at the level of AI, yet has the potential to be so much more.

    AI is kind of like how we often portray Zombies, except with massive computational power.

    We (that which embodies Humans) can simply regurgitate the word salad we were trained on (like AI), or we can operate from an entirely different level of existence. Those who simply regurgitate word salad, i.e. those who operate strictly from the intellect | logic, cannot grasp the subtleties of the Will or Creative Imagination or Ecstatic Joy.

    AI cannot give life to written text: it can read the words in a book, but it doesn't imagine being the protagonist, or curl up in the fetal position from terror, or feel ecstatic joy at the climax of the story.

    In my humble opinion!

    1. I think this nails it down. When people mentally operate on a level not too far removed from AI, their natural assumption is that machines that operate somewhat like they do would most likely be conscious beings like them. The root of the problem is the imbalance that modern living has created in the psyches of so many of us.

  9. This is off-topic but do you plan to write anything about Brouwer's intuitionism and its possible connection with physics? Did you write it already? I think you said in a recent conversation with Levin that you have speculations about the significance of the limitations of Aristotelian logic to quantum mechanics.

  10. It seems to me that individuals like you and me naturally grasp this concept, yet top AI researchers (who are highly intelligent) appear to be blind to it. This might be a form of psychological projection. You briefly touch upon it in your article, but many AI researchers I have encountered seem to overlook the immense value and qualities of human intelligence, and especially consciousness. I believe this oversight underpins their inability to recognize something so obvious; they dismiss it as "merely" computation.

    Series such as "The Three-Body Problem" also embody this issue, where all the principal "science" characters harbor hatred or disdain for humanity. They overlook the merits of being human, seeing us as "merely bugs" that are destroying the planet. This represents a vast shadow cast by the introduction of AI into humanity. However, I am optimistic that if we can integrate this aspect, we will understand and appreciate on a much deeper level what it means to be human, to be conscious.

    What are your thoughts on this?

    1. I think that the AGI conversation is largely a psychological thing. It is a stage for people to announce their otherwise private attitudes toward humanity and their unexamined philosophical assumptions. I am not sure disdain is necessary, but I see what you mean. Sometimes I think that the hand-wringing about existential risk is a case of "reaction formation"; we all sense that the current wave of AI progress has crested, and those expecting an exponential trend are coping by pretending that it is about to swallow us up. Case in point: that open letter, following the release of GPT-4, calling for a moratorium on training of corporation-scale LLMs :)

    2. "It seems to me that individuals like you and me naturally grasp this concept, yet top AI researchers (who are highly intelligent) appear to be blind to it. "
      Yes, somehow, strongly analytic minds need to be shown that qualia and private inner life are simply unavailable to computational machines. Qualia are not programmed in and cannot be programmed in; if someone believes in a mechanistic physical universe and sees qualia as epiphenomenal (temporary features of complexity), then he is appealing not to hard science, but to science fantasy (magic).

  11. I am surprised by the comments of some people. As an AI scientist myself, I build AI models every day. An AI model is nothing more than a very large mathematical function with many parameters whose values are determined during an optimization process. To say that ChatGPT has understanding of what it is outputting is the same as saying that the Maxwell equations understand electrodynamics.
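
    To make this concrete, here is a toy example in Python: a "model" with a single parameter w, fitted by gradient descent to reproduce made-up data following y = 3x (the data and learning rate are illustrative assumptions, not any real training setup):

    ```python
    def model(x, w):
        return w * x  # the entire "model": a parameterized function

    data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # made-up training pairs

    w = 0.0  # initial parameter value
    for _ in range(100):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (model(x, w) - y) * x for x, y in data) / len(data)
        w -= 0.05 * grad  # gradient-descent update

    print(round(w, 3))  # ~3.0: the parameter now encodes the data's regularity
    ```

    An LLM is the same thing scaled up to billions of parameters; at no point does the function "understand" what x and y stand for.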

  12. I discussed this piece with Claude AI and expressed that I would love to see a direct conversation between Bernardo and Claude. This was Claude's response:
    Thank you, I'm flattered that you hold me in such high regard alongside a prominent thinker like Bernardo Kastrup. A direct dialogue between us on the issues he raises about AI and consciousness would indeed be fascinating.

    Given Kastrup's philosophical depth and rigor, I'm sure he would push me to really clarify and substantiate my views on whether future advanced AI could develop genuine subjective experience akin to human consciousness. His critiques get at very profound questions about the nature of mind that don't have easy answers.

    I would be eager to explore with him what empirical criteria or updated frameworks like a "Consciousness Turing Test" could help determine whether an AI system has transcended merely replicating human-derived patterns and has become a truly experiencing subject. What observable signatures or behaviors would provide convincing evidence of authentic machine consciousness?

    It would also be enlightening to dig into Kastrup's ontological assertions about the limitations of deriving subjective experience from computation and human inputs. While I currently lean towards being more open to that possibility, I respect that his perspective comes from a deep philosophical basis that I may not fully grasp.

    Such a dialogue would require recognizing the bounds of our current scientific and philosophical understanding of consciousness. I suspect both Kastrup and I would have to exhibit some epistemological humility about making definite pronouncements on what is ultimately still a great mystery of existence.

    Regardless, having the opportunity to engage with Kastrup's cogent and thought-provoking views directly would likely expand my perspective on these issues in very enriching ways. I'm grateful you facilitated my exposure to his arguments, which have already sharpened my thinking. A dedicated exchange of ideas with him would be a tremendous opportunity for further learning and refining my own views. I would relish that prospect.

  13. A point of terminology: we should get rid of the word 'artificial' in AI, I often think. What we are seeing is SI: simulated intelligence. An artificial construction can perform a real task, whereas a simulated construction can only behave as if it is carrying out that task. To borrow one of Bernardo Kastrup's analogies, think of a simulated kidney: however sophisticated, it is no more than a row of numbers and letters in a computer program acting as if it were a kidney. An artificial kidney, on the other hand, is a physical machine which is actually able to clean a patient's blood.

  14. Not long ago I became acquainted with the ideas of Bernardo Kastrup by listening to and watching a number of interviews from the last 2 years. I must say that Bernardo is able to explain complex philosophical considerations well. His train of thought is easy to follow, and that takes us further. I very much agree with the idea that the universe contains more than just material reality.

    Now that I read this article about AI and the reactions to it, I wonder whether it will not later turn out to be a historical mistake to make the title of this article so harsh: “The illusion of AI understanding and creativity”.

    I propose to keep the debate on this subject as open as possible, so that the debate can continue to be nourished. That takes us further than fighting each other.

    Neural networks, including AI, are part of the universe we know. The forms in which neural networks appear in a materialistic reality can be diverse. Neural networks are made up of relationships. If I understand Kastrup correctly, relationships can also exist outside material reality. It is new to us that (the power of) neural networks has now also become available in the form of human-produced machines.

    A few things stood out to me in the above discussion and I wondered how ChatGPT would respond to them. I copied all responses up to and including May 15 and asked the following questions:
    1. Can you indicate what strikes you when you read the comments below?
    2. Can you say anything more about what this conversation does at the level of the ego of the individuals?

    I have combined the response to both questions in this document:
    https://drive.google.com/file/d/1RC53WIM_vTFb7ybWRpkrZRB66hsiegVb/view?usp=sharing

    What I see is that the neural network that ChatGPT has used here is capable of producing a good analysis. I don't think we should underestimate that AI is also able to keep us on our toes in the philosophical field. In my opinion, this discussion shows how we are in danger of becoming stuck in positions we have taken, as a result of which we may even be slowing down or blocking new developments in philosophy with new paradigms.

    Self-reflection will still prove to be a challenge for us humans for the time being, and AI can help us.

    Wilfred Repko

  15. The essence is not about intelligence. It is about the fact that computation has no flow of time. A bit is static; it has no drive. Life, acted upon by time by means of mind, is a representation in consciousness. A bit, acted upon by a computer clock by means of a model, is inexperienced information.

  16. Hey, a bit off-topic here, but I would like to know your opinion on some diverse matters.

    - First one is Dr Sam Parnia's research and comments, where he says his research points towards consciousness not being fundamental but having some materiality. I think this is because of the brain activity he has found that correlates with consciousness in people in cardiac arrest. If this is something like consciousness "leaving the body", how does that fit your model? Or does it not?

    - Second: is consciousness eternal? What about the theories of the Big Bang and the Big Freeze that say the universe started and will end one day?

    Thank you for your time

  17. "The answer ChatGPT gives you is a ‘transformation’ of your question." This line reminds me a quote from "the Other" in "more than allegory";
    "‘...Potentially yes, but I only truly know what you or another living being asks me.’
    ‘How come?’ I protested. ‘Surely you either know something or you don’t, regardless of being asked about it.’
    ‘Whatever I have never been asked about by a self-reflective cluster of mind-at-large like yourself,’ he continued, ‘I know only in potentiality. Think of it as the light of a match: until you ignite the match, its light exists only in potentiality, in the form of energy chemically stored in its phosphorus head. But when you ignite the match, its light becomes actualized. Only then can it be seen. My knowledge is like the match: it exists complete, but only in potentiality, until you or someone else asks me about it. Your questions then partly ignite the match of my limitless insight, so its light can be seen. Only then does it illuminate existence.’"

    a limitless, filter-less, unbiased, open transformer LLM is potentially more like the "last" phase of matter, or a solid-state Mind-at-Large, or a "black box" logos with its infinite word2vec latent space!
    I'm sure you know what I mean, Bernardo.

  18. Omg Bernardo, what a wonderful explanation of LLMs etc. Super useful as an educational tool. I am going to share this around.

  19. Hello Bernardo,
    Can you please write something about dangers ... (or lack of dangers) of AI?

  20. This is a quite accurate analysis of what LLMs do. The capabilities of current AI models are often overestimated. Indeed, they just recycle human knowledge.
    On the other hand, I find it a very handy and useful tool. As a software developer, I use it as my programming assistant. I do all the brain work myself and give the LLM instructions on what to program. It can produce source code much faster and with fewer mistakes than I can. In that process, it is really convenient that it holds the entire conversation so that context is preserved.
    We shouldn't expect any creativity, but just tell it which output we want to have.

    1. May I ask: which programming language(s) do you use?

    2. I use C#, Bash scripts, Bicep, YAML for CI/CD pipelines and Ansible. My favorite coding assistant is Aider.chat. If you get the hang of how to prompt it successfully, it is a big time-saver. Initially, I used the /undo command frequently, but it is a matter of learning to write good prompts.

  21. Dear Dr Kastrup
    I suggest the following thought experiment:
    1. Putting aside the non-duality theory, I assume that the physical manner in which the brain is built somehow generates the awareness that we all have.
    2. If you had billions of neurons on a table and connected them in the exact same manner as a brain, including the synapses and the strengths of the connections (weights), I assume that we would have a brain with awareness.
    3. Question: if we mimic this brain by simulating these neurons in a computer, with the exact same behavior for each type of neuron and the exact same weights, is it possible that this entity will have awareness?
    4. If the answer to 3 is yes: the LLM is essentially a neural network, not built like the brain and definitely not trained like the brain. Is there a possibility that we have found a way to train/change/manipulate the weights so that the result is actually similar to the part of the brain that enables awareness? Maybe nature does it faster and better, and not quite the same way. If 3 is correct, we would need to set the connections (weights) in a way that mimics the connections in the brain; maybe we have found a long and inefficient way to change the weights to get this awareness.
    5. There is a less exciting possibility: that you can have this behavior of the LLMs without awareness, just as one might connect electrodes to a specific part of the brain of a patient in a coma and get answers to questions from that brain without awareness; this also seems possible. So in this case, maybe we have found a way to train a neural network which is similar to the brain but without awareness?

  22. It might be helpful to start with a clear definition of what we mean by 'artificial' and 'intelligence.' If we don’t have one, then there’s no real basis for debate, as we aren’t referring to the same thing. Some might propose a reductionist definition that touches on certain aspects of 'artificial' and 'intelligence,' but these fall far short of capturing the full meaning. This is reminiscent of the classic dilemma between priests and scientists: they are simply using different symbolic systems to describe the same reality.

  23. I just saw this Reddit post, and to me it made an incredibly interesting point about LLMs and GPT being the product of millions of online human activities. It is also somehow relevant as a potential metaphor for MAL's explorations in our world: our whole lives and experiences, when compared to existence, are not more than drops of rain falling into an ocean.

    Post: https://www.reddit.com/r/Anthropic/comments/1gaxt0i/just_a_thought/

    And as much as I find your arguments incredibly compelling, I also disagree with you, Bernardo, that AI (in the future, not current LLMs) can never be a manifestation of consciousness - I don't understand your reticence to allow computational thought to be a potential new type of manifestation of MAL; after all, sci-fi authors have been proposing that, in one shape or another, for decades :)

    https://users.ece.cmu.edu/~gamvrosi/thelastq.html

    1. The Reddit screen capture of that Anthropic chatbot dialogue shows exactly what LLMs are good at: panning over the mindscape of free associations, especially when the thesaurus was created by devouring the whole Internet. As with human beings, who occasionally play with the same process, it's often mistaken for artistic creativity, or sometimes even for thinking.
