
Quantum Computing: Why Neuromorphic AI Needs a New Classical Storage

    By David Stephen, who looks at neuromorphic AI in this article.

    What are the structural foundations of human intelligence in the brain? Simply, if intelligence is the use of memory, what is the architecture of human memory that makes intelligence, as an outcome, exceptional?

    If AI is to at least match human creativity and innovation, at the measure of extraordinary advancement, it may require more than just scale [of compute and data], which large language models [LLMs] currently have.

    Neuromorphic AI Needs a New Storage

    The trajectory of artificial intelligence towards artificial superintelligence may stall without a new classical memory architecture for storage, one similar to the human brain's.

    Humans do not have complex intelligence because they keep a unique memory of every sensation. No. Human memory, conceptually, is mostly a collection of many similar things, such that the interpretation of anything is done with the collection, not with specificity, for the most part.

    If an individual sees a door, or hears the sound of a vehicle, it is almost immediately interpreted, so that the relay [for what to do with it, or not] proceeds without intricate visits to the respective [unique] storages.

    This fast-interpretation objective makes it possible to reach quick decisions on many things using a general mode, so that when those things are to be operated on or improved, it is not always with intricacies that delay efficiency.

    Also, the interpretation comes from the collective storage of doors, or of vehicle sounds. This does not mean that there is no specific knowing of things; there is, but such memories are generally fewer [aside from language] and exist separately from the pack. Still, what gets used [say in language] may come from collections.

    An example of this is speaking, where, even though words are specific, what surfaces is sometimes not what was expected but something else within the collection. However, language is still easier because it is learned early. How so? Several memories exist separately from early on, but tend to collect, because of similarities, conceptually. Yet language stays mostly that way, even though there are collections with images, sounds, scents and other similarities of the same thing.

    A disadvantage of collection is that learning [say a language, or advanced physics for a non-physics person] as an adult has to join existing collections, not just exist alone. That process is slower than early in life, resulting in delays. Specificity, on the other hand, also makes it tough as an adult to learn many faces easily, and so forth.

    Collection

    Now, because the group is used for interpretation, it is generally easier to make decisions faster, and to have relays [or transport] within the brain get around with little barrier to whatever results are sought.

    Also, most collective storages have overlays, where a collection is not isolated but overlaps with another. Simply, aside from a collection of doors, part of it overlays with wood, or with safety, and so forth.
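    As an illustration only, here is a minimal Python sketch of collections and overlays as overlapping sets. The Collection class, its sample members and the overlay method are hypothetical names invented for this article, not an existing library:

        # A minimal sketch of collective storage with overlays.
        # "Collection" and its sample members are illustrative, not an existing API.
        class Collection:
            def __init__(self, name, members):
                self.name = name
                self.members = set(members)

            def overlay(self, other):
                """Return the shared members: where this collection overlaps another."""
                return self.members & other.members

        doors = Collection("door", {"front door", "car door", "oak door", "safe door"})
        wood = Collection("wood", {"oak door", "table", "floorboard"})
        safety = Collection("safety", {"safe door", "lock", "seatbelt"})

        # Interpretation works with the whole collection, not a unique stored instance.
        print(doors.overlay(wood))    # {'oak door'}  -> where door overlays with wood
        print(doors.overlay(safety))  # {'safe door'} -> where door overlays with safety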

    Human Intelligence

    If the goal of an individual is to improve something, say an art, by some creative action, it is generally easier to have lots of relays across collective storages and their overlays. Simply, storages in the mind are structures that allow what is vital to be picked out and recombined.

    Some overlays may not even be obvious, but storages might set them up, so that by the time relays get there, it is possible to find something new. Some overlays are not fixed: there may be several options they connect to, so they rotate from some to others, from time to time.

    This is a reason that even when people do the same thing often, they still do it in slightly different ways.
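    A small hypothetical sketch of that variability, assuming a relay is a walk over overlay links and rotation is a choice among the options a storage connects to; the names and links here are invented for illustration:

        import random

        # Hypothetical overlay links; some storages connect to several options,
        # and the non-fixed ones rotate among them from run to run.
        overlays = {
            "door":   ["wood", "safety"],
            "wood":   ["forest", "furniture"],
            "safety": ["lock"],
        }

        def relay(start, steps=2):
            """Walk from one storage to others through overlay links."""
            path = [start]
            node = start
            for _ in range(steps):
                options = overlays.get(node, [])
                if not options:
                    break
                node = random.choice(options)  # rotation among connected options
                path.append(node)
            return path

        # The same task, run three times, takes slightly different paths.
        for _ in range(3):
            print(relay("door"))  # e.g. ['door', 'wood', 'forest'] or ['door', 'safety', 'lock']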

    Aside from storages, relays are also excellent, shaping how destinations are reached, across different dimensions, toward goals of improvement or operational intelligence.

    Simply, storage is a major factor in what makes human intelligence excellent.

    AI Superintelligence and World Models

    It is possible that, as compute and algorithms get better, AI would improve. However, classical storage, or how the data that AI uses is stored, would need to mirror the brain for much better results.

    This means groups, and overlays of groups, for what is similar. This could be done at the hardware level, say with collective magnetic orientations or electrical charges of memory cells. It may also be done with new memory protocols. But the data must be organized like the brain's, into collectives and overlays.

    Already, deep learning architectures are so excellent that they are pervasive relays over data. However, the present storage structure of digital data is too specific, limiting how such architectures can work with collections, groups of trees like in a forest, rather than singular trees.
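    A sketch of what such organization could look like at the software level, assuming records are indexed under every similarity tag at write time; the records, tags and build_collections function are invented for illustration, not any existing memory protocol:

        # Sketch: index records under every tag at write time, so a read can
        # address a whole collection [the forest], not one record [a tree].
        # Records with several tags land in several collections, which is
        # what produces overlays.
        RECORDS = [
            {"id": 1, "tags": {"door", "wood"}},
            {"id": 2, "tags": {"door", "metal", "safety"}},
            {"id": 3, "tags": {"wood", "tree"}},
            {"id": 4, "tags": {"tree", "forest"}},
        ]

        def build_collections(records):
            collections = {}
            for rec in records:
                for tag in rec["tags"]:
                    collections.setdefault(tag, []).append(rec["id"])
            return collections

        collections = build_collections(RECORDS)
        print(collections["door"])                                  # [1, 2] -> read the group
        print(set(collections["door"]) & set(collections["wood"]))  # {1} -> the overlay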

    Innovation towards superintelligence, beyond neurosymbolic AI, neuromorphic computing and world models, would require a new memory architecture, without which superintelligence may be tougher to achieve.

    It is possible to accelerate this concept in a research design to be ready before June 30, 2026, while also laying the ground for new modalities in quantum computing towards 2030.

    There is a recent [November 26, 2025] paper in Nature, Ferroelectric transistors for low-power NAND flash memory, stating that, “NAND flash memory is essential in modern storage technology, amid growing demands for low-power operation fuelled by data-centric computing and artificial intelligence. Its unique ‘string’ architecture, where multiple cells are connected in series, requires high-voltage pass operation that causes a large amount of undesired power consumption. Lowering the pass voltage, however, poses a challenge: it leads to an associated reduction in the memory window, restricting the multi-level operation capability.”

    “Here, with a gate stack composed of zirconium-doped hafnia and an oxide semiconductor channel, we report ultralow-power ferroelectric field-effect transistors (FeFETs) that resolve this dilemma. Our FeFETs secure up to 5-bit per cell multi-level capability, which is on par with or even exceeds current NAND technology, while showing nearly zero pass voltage, saving up to 96% power in string-level operations over conventional counterparts. Three-dimensional integration of FeFET stacks into vertical structures with a 25-nm short channel preserves robust electrical properties and highlights low-pass-voltage string operation in scaled dimensions. Our work paves the way for next-generation storage memory with enhanced capacity, power efficiency and reliability.”

    David Stephen currently does research in conceptual brain science, with a focus on the electrical and chemical configurators of how they mechanize the human mind, with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.
