About
What's New in AI?
The milestones of AI stretch back well over a century:
- the concept of semantics in 1883
- linguistic language models between 1906 and 1912
- natural language processing ("NLP") in 1916
- neural networks in 1943
- symbolic computation, and the christening of "AI", in 1956
- the first trainable neural network, the Perceptron, in 1957
- the first computer game using machine learning in 1959
- natural language processing and the Weizenbaum chat-bot "Eliza" in 1966
- expert systems in the 1970s
- (small) language models and more advances in symbolic computation in the 1980s and 1990s
- fast-computing statistical models for NLP in the 1990s
- ontology software in 1993
- the world wide web in the mid-1990s
- "Deep Learning", a multi-layered Neural Network, in the 1990s
- semantic graph data information structures in the late 1990s
- the semantic web in 1999
- the Generative Adversarial Neural Network in 2014
- Generative Pre-Training ("GPT") in 2018
- "smarter chat-bots" (ChatGPT) in 2022
- a design for the large language models used by the new chat-bots, open-sourced in 2023
[Source] https://www.dataversity.net/a-brief-history-of-large-language-models/
Because GPT-based chat-bots are, like Weizenbaum’s “Eliza” was in 1966, “made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer”, GenAI has been the subject of much hype and excessive investment measured in the billions of US dollars.
However, unlike Weizenbaum's conclusion about Eliza, GenAI will not be "moved from the shelf marked 'intelligent', to that reserved for curios" because, while GenAI does not display machine intelligence, it is a useful technology and an important AI subsystem.
The future evolution of AI requires two things:
- the packaging of neuro-symbolic AI for truly cognitive, thinking machines, i.e. the successful combination of machine learning and symbolic programming to facilitate ontological reasoning
- the rapid training of a global workforce of what we call “information engineers”.
CSI’s Founders started their computing careers as software engineers at a time when software automation was accelerating and the world wide web of today was being born. One Founder is a globally recognized expert in computing at its most fundamental level: a former member of IBM’s elite Gold Consultant team. The other Founder moved from software engineering to commercial management of technology roll-outs, and then to managing world-class innovation to the global gold standard of US patents.
The Founders claim they have found a path to Artificial General Intelligence: combining machine learning and symbolic programming to facilitate ontological reasoning, automating reasoning at the level of a judge of the Supreme Court of the United States.


The Legal Industry - Our Test Bed
Reasoning
AI experts now broadly agree that the most important advance needed for the next generation of AI is automated reasoning. The essential characteristic of all intelligence - human, animal, and computer - is the ability to reason at some level. We are still learning how and where humans accumulate, store, and retrieve knowledge (and wisdom). In computing we use sophisticated software to store and retrieve knowledge in and from computer memory. That sophisticated software is known as a Knowledge Graph, and the two main types are the Labelled Property Graph (LPG) and the Semantic Graph.
An Ontology is a component of a Knowledge Graph that describes a concept, such as dogs. In addition to Ontologies, a Knowledge Graph will contain data that provides details of one or more instances of the concept, such as your pet dog Fido the German Shepherd. There we have a simple Knowledge Graph: a generic description of dogs (the Ontology) plus a real example of one (the data). Relationships between the data items are called edges, and they too can be described in some depth to enrich the Knowledge Graph.
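The Fido example above can be sketched in a few lines of code. This is a minimal illustration of a Knowledge Graph as subject-predicate-object triples, not any product's real API; the helper `query` function and the owner "Alice" are hypothetical additions for the sake of the demonstration.

```python
# A toy Knowledge Graph stored as (subject, predicate, object) triples.

# The Ontology: a generic description of the concept "Dog".
ontology = [
    ("Dog", "is_a", "Mammal"),
    ("Dog", "has_property", "breed"),
]

# The data: one real instance of the concept, plus labelled
# relationships between data items (the "edges").
data = [
    ("Fido", "instance_of", "Dog"),
    ("Fido", "breed", "German Shepherd"),
    ("Alice", "owns", "Fido"),  # hypothetical owner, for illustration
]

graph = ontology + data

def query(graph, subject=None, predicate=None, obj=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o) for (s, p, o) in graph
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# What do we know about Fido?
print(query(graph, subject="Fido"))
# Which things are instances of Dog?
print(query(graph, predicate="instance_of", obj="Dog"))
```

Real Semantic Graphs standardise this triple pattern (e.g. RDF) and layer formal Ontology languages on top, which is where the sophistication, and the complexity, discussed below comes from.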
Ontologies are the most sophisticated way to describe a concept, and a Semantic Graph has advantages over a Labelled Property Graph. Unsurprisingly, however, the sophistication of Ontologies and Semantic Graphs can introduce complexity requiring considerable expertise. The task of AI technology inventors like CSI is to make AI platforms easier to use by keeping the sophistication and complexity hidden under the covers, behind a user-friendly interface.
How to facilitate computer intelligence has been the biggest challenge in computing since the Dartmouth Summer Research Project on Artificial Intelligence that kicked off on 18 June, 1956. As many different technologies and approaches have been developed and tried, we have come to realise that AI will be facilitated by multiple techniques and technologies, indeed, perhaps one hundred or more. We pretty much know what they are now. So, hiding the sophistication and complexity of those many new technologies under the covers for ease of use and widespread adoption is now the major challenge ahead of us. Not least, ensuring humans can collaborate with and control those technologies is also critical for safe and responsible AI.
Many of the above core technologies for AI, though well understood, have been sidelined by the recent hype around one of them, GenAI. The most recent advance in decades-old Machine Learning, GenAI has been nonsensically promoted as the gateway to super-intelligence. GenAI can be prompted to provide eloquent responses emanating from massive Large Language Models, but it doesn’t have the ability to reason. Proponents of GenAI say that they are working on reasoning as an objective of so-called “agentic systems”, but given the importance of reasoning for intelligence, that would be a case of the tail wagging the dog. Reasoning is the core technology required for intelligence and, like building a house, a computer systems architecture needs to start with the core requirement and cater for the many additions that might be desirable rather than essential.
So it is with computer reasoning: CSI researchers have approached AI development with the view that the reasoning function needs to be the first step in the design, with powerful data-gathering technologies like GenAI positioned alongside it.
We chose to test our new approach to AI invention against the benchmarked requirements of the legal profession for the following reasons:
- there are many flavours of reasoning model, and the most demanding of them involve the nuanced Legal Reasoning skills we expect of senior jurists (academic and judicial).
- an example of benchmarks for AI Legal Reasoning, “AILR”, had been proposed by Lance Eliot PhD in 2009 and helped us take a step-by-step approach to our computer-reasoning research.
- in a previous venture, the Founders of CSI witnessed three examples of systemic corruption of the Australian corporate regulatory regime. The varying magnitude, complexity, and national economic implications of the corruption, and our intimate knowledge of the details, have allowed for real-world prototyping of technology approaches to AILR.
- the Founders believed that many of the building blocks of AI had been proposed and studied in AI’s 70-year history, but that few, if any, had managed to put the right pieces of the jigsaw in the right places, albeit they think that will change soon.
- our belief that a computer-reasoning solution for AI would be transportable to other knowledge and truth-telling domains, such as medical research and next-generation, safe and responsible social networks that seek to eliminate misinformation and other harmful content.
Power to the People
In 1965, Intel co-founder Gordon Moore forecast that the number of transistors on computer chips would double every two years. In the near-sixty years since, his prediction has proved true, resulting in exponential growth in computer processing speeds. That has supercharged one AI subsystem, Machine Learning, where massive amounts of text and data are now consumed and regurgitated as elegant, human-like prose and sophisticated image manipulation, to the extent that it has been renamed Generative AI, or “GenAI”.
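The compounding behind Moore’s forecast is easy to check with simple arithmetic. The sketch below assumes a clean two-year doubling over sixty years, which is an idealisation of the actual industry history:

```python
# Moore's law as arithmetic: transistor counts doubling every two years.
years = 60
doublings = years // 2          # 30 doublings in sixty years
growth_factor = 2 ** doublings  # 2**30 = 1,073,741,824

print(f"After {years} years: roughly a {growth_factor:,}-fold increase")
```

Thirty doublings yield a factor of over a billion, which is why exponential hardware growth has been able to feed ever-larger models.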
In the absence of other AI subsystems though, GenAI is prone to hallucination (making stuff up), bias, and basic errors (even in relatively simple mathematics). In short, GenAI doesn’t exhibit intelligence at any level, and is opaque to the extent that its conclusions are unauditable.
In parallel with the exponential growth in the processing power of High Performance Computing microchips, we have seen the evolution of cloud computing that effectively and efficiently shares resources such as data storage, processing, and high-speed memory. We have also seen the emergence of Hybrid Cloud (mixing on-premise resources with cloud resources) to fundamentally increase the range of processing options for the next generation of AI.
The CSI Founders call this next generation Cognitive AI because it combines machine learning’s high-speed gathering of data for implicit meaning with the slower, step-by-step, explicit meaning of information and, eventually, wisdom.
This hybrid version of AI uses neuro-symbolic approaches to machine learning and knowledge representation to provide cognition, facilitated by very advanced reasoning that will equal or better human reasoning most of the time, built by humans and under human supervision.
This emerging level of AI is not to be confused with the still mysterious self-awareness we find in nature, especially in humans. Machines operating at that level are a long way off and remain a subgenre of fiction.


The Misinformation Challenge
The online world today is awash with misinformation.
Misinformation is a threat to our economies because it can be used by governments to benefit their particular constituencies, by businesses to enrich their owners and executives, to pervert the course of justice, or to maliciously defame enterprises and people; all of these can cause misallocation of society’s resources.
Misinformation is also a threat to our democracies because it can be used to divide communities that are too often deceived by politicians, the media, and large enterprises. Today, misinformation is a dangerous weapon of politics in many advanced economies, and it can foster a breakdown in law and order such as we have seen around the world.
Powerful groups in society are not focused on combatting misinformation. Indeed, some powerful sections of the media see their role as infotainment, and the more sensational, the better - to hell with the truth!
Cognitive AI will address misinformation because it can automatically shield us from misinformation and other forms of abuse, and it can facilitate automated truth-telling. Indeed, Cognitive AI is the only way to manage misinformation because there is too much of it for humans to correct, and many of us have reached the point where we simply don’t know what or who to believe.
Governments have generally been slow to react to misinformation. The polite description of government misinformation includes terms like propaganda, spin, coverup and, wait for it, “non-disclosure in the public interest”.
“Research funded by the Cyber Security Cooperative Research Centre finds that a focus on disinformation campaigns is important and necessary. CSRI and Hub researchers propose legal sanctions against the most insidious forms of disinformation, which we refer to as disinformation campaigns.” University of NSW - Source.