Difference between "true" AI and "applied" (legacy) AI



What we have seen so far with IBM's Watson, ChatGPT, and Google's many AIs is not "true" intelligence. All of these examples rely on neural networks, a technique first developed in the mid-20th century that has seen a renaissance thanks to heightened computing power. But neural networks are not really intelligent. They merely learn to simulate some form of intelligence. This is easy to see when you consider that a NN needs millions of examples to learn, for example, what a cat looks like. It literally needs millions of images of cats to reach a high degree of certainty (something like 95-98 percent) in determining whether a given image contains a cat or not. A child, on the other hand, needs merely a handful of examples to learn what a cat is - and learns a lot more in the process: how cats behave, what sounds they make, what colors their fur might have, and so on. So if NNs aren't really intelligent, then what is? Well, we maintain that our approach yields "true" intelligence. True intelligence is not just having one faculty (a NN can only ever be trained to do exactly one thing), but a collection of capabilities: drawing inferences, detecting analogies, creatively and originally forming new concepts and ideas, and abstracting a pattern from just one or a handful of examples.

If you want to compare the current, neural-network-driven state of the art in AI with a classical, analytically hand-written ontology for an AI problem, the benefit of this project's approach - having an actual theory, or rather a model of thought, as the core element - can be summed up in one short and simple statement: our way will yield a "true" artificial intelligence, i.e. one that can think exactly like an extremely well-educated and lettered person does. Drawing inferences, making logical deductions, forming analogies, creating metaphors, differentiating between truth and falsity - all of these capabilities amount to an intelligence that will not only rival the human but far surpass it. We are talking about orders of magnitude here: in speed, in the amount and detail of its knowledge, and in the capability to take an idea or concept from one field (maths, for example) and apply it in a totally different one (chemistry, for example), exactly when such a transfer is warranted, plausible, or needed.

The repercussions of this invention cannot be overstated: we are talking about a breakthrough technology, one that will change overnight what we think we know about what computers can and cannot do, and shake up the whole tech world. We believe hardly anyone is going to invest resources into the current state of affairs - i.e. finding real-world applications for trained neural nets - when there is a general intelligence available, one that can do it all without being specifically trained on millions of examples. Instead of using a one-trick pony (which every neural net is) that you have to teach with millions of examples, why not use an AI that can teach itself, where all you have to do is give it the general parameters and goal(s) of the task at hand, just as you would give them to a human actor? Moreover, as anyone with a certain amount of knowledge about neural nets will tell you, they are not intelligent in the way the word is commonly understood. They seem to have some limited capability, restricted to the purpose they were created and trained for. But that is not intelligence as we normally use the word: intelligence is the sum of all human mental faculties, not just summarizing a text, making suggestions in some limited, specific scenarios, or writing an email which you don't have time to write yourself. These are all neural-net-based, "applied" instances of "intelligence" - where "applied" actually means "limited in scope to that one specific task". This is in stark contrast to "true" or "general" AI - and we are convinced askabl will be merely the first instance of this new kind of AI.

Why start with language?



Our AI will accumulate capabilities as we go along - so why have we chosen to start with language? Well, for a whole catalogue of reasons. First, mastering language to the degree of an adult human speaker is somewhat of a (if not the) holy grail of computer science. Second, all human mental capabilities are either based on natural language, gravitate around it, or can be expressed, described, or defined by natural language. Open a maths textbook, for example: most of it will not be equations and formulas - most of it is just text, written in natural language. The same is true of virtually all scientific fields. Even things like art or music, as well as human emotions, instincts, desires, and wishes can - to a degree - be described, or at least talked about, using natural language.


Outlook



But we won't stop there. Since askabl is able to form its own thoughts and ideas, creatively and without outside influence or prompting, some day in the future askabl will encompass truly everything human beings are mentally capable of. That is, it will be able to compose music, make jokes, and write novels - and it will become more human than any of us, including its creators, can imagine.

Technicals



It all starts with this one lightbulb moment: a feeling, an intuition, that lets you know you have mastered the problem currently at hand - without explicitly knowing how or why you got there. Everything starts with this one moment: when your mind kickstarts an understanding of the issue, the details of which, you are sure, can be worked out over time. You might not yet know what the solution will look like in the end - however, you are certain that there is a way: the intuition of the solution has just presented itself to you, and it is only a matter of time, and of applying some logical thinking, before you have worked it out in all detail.

Probably all scientists, and especially all logicians and mathematicians, know this instant, this short, fleeting moment, in which you step up the ladder and a piece of knowledge presents itself to you: a solution to a problem, the missing piece of the puzzle, or a piece of a proof, which you had sought, maybe over months or even years, but which had eluded you so far. It is one of the most joyous feelings, maybe the most joyous, one can have when intellectually engaging a specific problem, or a whole range of them. Learning can be tough, rewarding, and joyous - sometimes all of those things at the same time - but nothing beats the experience of this one instant in time, when a key is turned, a box is opened, and the solution to a difficult problem rolls itself out in front of you like a red carpet. It is like stepping into clear light, rising up above the fog of uncertainty. A feeling of joy and pride that is unique and extraordinary.

OK, this is supposed to be the technicals page, and not the page for metaphorical meandering - and we will get to the technicals in a second. Yet the inner workings of our minds when confronted with a complex problem are an interesting part of experiencing one's humanity, and worth some attention: they have a very non-linear, seemingly chaotic, and always unpredictable character, because you never know when this sort of moment might hit you (if it ever does) - when your intuition, experience, and prior knowledge, both your feelings and your intellect, seem to align perfectly for just that one short instant, and you suddenly step up and become this one bit wiser than before: you become this one bit more, mentally, sometimes even spiritually, as a human being.

The core algorithm



On the main page, there is a comparison between the advantages and disadvantages of neural networks vs. a classical, analytically hand-written ontology for an AI problem. The difference is absolutely crucial to the project: the importance of an algorithm that enables the machine to think exactly like a human being can hardly be overstated - and, naturally, this algorithm sits at the very core of our AI. While no technical details of how this core element works will be disclosed, for reasons mentioned here, one of its characteristics is also one of the nicest things about it: the core algorithm does not require a lot of code, nor huge amounts of processing power. It is one homogeneous block consisting of only a few hundred lines of code, and it is arguably the most elegant part of the whole project, because it produces so much while occupying so little space and needing so few processing resources at runtime. It is the part that elevates the inevitable mess of details - the as yet uncoordinated bits and pieces of information, e.g. regarding the grammar and syntax of a sentence being read in - into one coherent, quite beautiful mathematical structure: the machine equivalent of forming an original idea, of having a novel thought. A few hundred lines is really nothing - a modern OS has anything from a few hundred thousand to millions of lines of code.

This short piece of code, the core algorithm, directs and structures the input and output of about a hundred neural nets, each serving one specific task. The core hoards bits and pieces around it which are necessary to make the whole thing work, and which not only make up the bulk of the coding work but also consume the most processing power: from seemingly trivial things like splitting a text into sentences, to analyzing a statement's grammar, to preparing its syntactic information. Then the myriad of semantically trained neural networks step in, each of them searching for specific properties - for the patterns of meaning they have been trained to detect. When all of this is done, the core algorithm kicks into gear: if there is a deus ex machina moment in the project, this is where it happens - the element that makes the magic happen, collecting and structuring all previously gathered information and forming it into one coherent, consistent overall pattern: what we call meaning.
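
To make this division of labor more concrete, here is a minimal, purely illustrative sketch in Python of how such an orchestration pipeline might be wired together. Since the core algorithm itself is not disclosed, every function, name, and detector below is a hypothetical placeholder, not the actual implementation:

    # Purely illustrative sketch of the pipeline described above.
    # The real core algorithm is undisclosed; all names are placeholders.
    import re
    from dataclasses import dataclass, field

    @dataclass
    class SentenceAnalysis:
        text: str
        syntax: dict                      # placeholder for grammatical features
        semantic_hits: dict = field(default_factory=dict)

    def split_sentences(text: str) -> list[str]:
        # Stand-in for the "seemingly trivial" preprocessing stage.
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

    def analyze_syntax(sentence: str) -> dict:
        # Stand-in for grammar analysis; a real system would use a full parser.
        tokens = sentence.split()
        return {"tokens": tokens, "length": len(tokens)}

    # Stand-ins for the ~100 semantically trained networks, each of which
    # detects exactly one property ("pattern of meaning") in a sentence.
    SEMANTIC_DETECTORS = {
        "question": lambda s: s.rstrip().endswith("?"),
        "negation": lambda s: "not" in s.lower().split(),
    }

    def core_algorithm(analyses: list[SentenceAnalysis]) -> dict:
        # Placeholder for the undisclosed core: it merges all per-sentence
        # findings into one coherent overall structure ("meaning").
        return {"meaning": [vars(a) for a in analyses]}

    def process(text: str) -> dict:
        analyses = []
        for sent in split_sentences(text):
            a = SentenceAnalysis(text=sent, syntax=analyze_syntax(sent))
            for name, detect in SEMANTIC_DETECTORS.items():
                a.semantic_hits[name] = detect(sent)
            analyses.append(a)
        return core_algorithm(analyses)

    print(process("The cat sat on the mat. Is it not asleep?"))

The point of the sketch is the shape of the flow - cheap analytical preprocessing, many narrow detectors, one small aggregation step at the end - not any particular detail.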

Keeping control vs. letting machine intelligence decide



In nearly all AI algorithms, methods, or techniques, there is a dichotomy between controlling what is happening in a complex machine like a computer - classically done meticulously, down to every last detail - and letting go of control, having the computer "do its thing", i.e. letting the machine decide. This duality has virtually nothing to do with the many popular visions of an AI-controlled future, expressed in countless books and movies, which usually spell out the doom of all humanity in a dystopian time when the machines take over the world: virtually all of today's AI techniques have a management layer on top which is completely analytical, i.e. controllable by humans down to the last detail.
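
As a simple illustration of what such an analytical management layer looks like in practice - a generic, hypothetical sketch, not code from askabl - consider a deterministic wrapper that vets every proposal an opaque, learned component makes against hand-written rules:

    # Generic, hypothetical sketch of an analytical "management layer":
    # a deterministic, human-auditable wrapper around a learned component.
    from typing import Callable

    def managed_decision(model: Callable[[str], str],
                         query: str,
                         allowed_actions: set[str]) -> str:
        proposal = model(query)              # let the machine "do its thing"
        if proposal not in allowed_actions:  # hand-written, inspectable rule
            return "rejected"                # control stays with the humans
        return proposal

    # Usage: a toy "model" whose proposals are vetted by the analytical layer.
    toy_model = lambda q: "summarize" if "long" in q else "delete_everything"
    print(managed_decision(toy_model, "this text is long", {"summarize"}))  # summarize
    print(managed_decision(toy_model, "short text", {"summarize"}))         # rejected

The learned component may decide whatever it likes; the surrounding layer remains fully analytical and auditable, which is exactly why today's systems stay controllable down to the last detail.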