Cognitive Software Inc. is the ultimate holding company of a group of companies that includes:
- two software development laboratory companies, one each in Sydney, Australia and Austin, Texas.
- a US law firm to model our SPIDERS cognitive AI platform for the legal industry and drive its adoption strategy.
The group of companies is being built as an Austin, Texas-based United States corporate entity, with resources deployed in Austin through FY2024, which begins 1 October 2023.
Cognitive Software Inc. expects to attract investment of at least US$50 million over the three-year period from 1 October 2023 to 30 September 2026.
Our CEO and CTO joined IBM Australia as university graduates, pursuing careers in software engineering. Initially working in IBM’s in-house global systems development, both eventually moved to senior and high-profile management and technical roles respectively, where they worked on world-leading customer projects that attracted international attention.
Their software engineering journey began around the time IBM announced its mainstream relational database management system, DB2, in 1983. In the twenty years prior to that evolution of data management automation, IBM’s renowned and respected Research Group had been involved in the history of development of AI technologies. Those early technologies were not feasible because, at that time, neither the hardware nor the operating system software could supply the computing functionality and power that AI required.
In 1965 Intel co-founder Gordon Moore forecast that the number of transistors on computer chips would double every two years. In the nearly sixty years since, his prediction has proved true, resulting in exponential growth in computer processing speeds. In addition, we have seen the evolution of cloud computing that effectively and efficiently shares resources such as data storage, processing, and high-speed memory.
The processing power required for AI has now arrived and enabled algorithmic models for so-called ‘Machine Learning’, where we now see the consumption of massive amounts of text and data and its regurgitation as elegant prose. Machine Learning is generally hyped as evidence of artificial intelligence but, as clever as it is, Machine Learning is not AI. Indeed, the most recent advances in Machine Learning, known as Generative Pre-trained Transformer, “GPT”, or “Generative AI”, and typified by ChatGPT, are not reliable enough to be considered intelligent by experts; the AI expert community increasingly considers them potentially dangerous in some applications because the integrity of their output cannot be guaranteed. [Source1]
“With A.I. behind more books, the possibility of getting disastrous guidance is increasing.” [Source2]
The online world today is awash with a tsunami of misinformation.
Misinformation is a threat to our economies because it can be used by governments to benefit their particular constituencies, by businesses to enrich their owners and executives, to pervert the course of justice, or to maliciously defame enterprises and people; all of these can cause misallocation of society’s resources.
Misinformation is also a threat to our democracies because it can be used to divide communities that are too often deceived by politicians, the media, and large enterprises. Today, misinformation is a dangerous weapon of politics in many advanced economies, and it can foster a breakdown in law and order such as we saw in the United States on January 6, 2021, and government employee corruption such as we have seen in Australian scandals in recent years.
Powerful groups in society, especially sections of the commercial media, including advertising-focused internet search, are not focused on combatting misinformation. Indeed, some powerful sections of the media see their role as infotainment, and the more sensational it is, the better, to hell with the truth!
Cognitive AI is the only way societies can address misinformation because there is now too much misinformation for humans to correct and many of us have reached the point where we simply don’t know what or who to believe.
Governments have generally been slow to react to misinformation. The polite description of government misinformation includes terms like propaganda, spin, coverup and, wait for it, “non-disclosure in the public interest”.
“Research funded by the Cyber Security Cooperative Research Centre finds that a focus on disinformation campaigns is important and necessary. CSRI and Hub researchers propose legal sanctions against the most insidious forms of disinformation, which we refer to as disinformation campaigns.” University of NSW - Source.
Tackling misinformation is now emerging as a societal priority ranking in importance with climate change. Apart from elements of our own governments engaging in misinformation and hiding information, misinformation is becoming a powerful weapon wielded by non-democratic governments to damage our democracies through cyber espionage and hacking. Cognitive AI systems are required and, to be effective, must be able to distinguish fact from fiction in real time.
We believe that for this to occur, we must respond with science.
“Although, recent past has also proven us to be in a time of post-truth, where ‘alternative facts’ or fake news run rampant and mistrust in science is rife. Often, the finger is pointed at the spread of social media as the culprit, but it must be said that it is not the only one. Misinformation encourages a culture of suspicion with regards to scientific facts. However, simply pointing it out as such is not the remedy to reverse mistrust in science; confidence in science could be interpreted as a subjective ‘opinion’ and, as such, could drive the opposite effect further reinforcing doubt in a ‘denier’.” - Yves Laszlo, Professor of Mathematics at Université Paris-Saclay - Source
We believe social media has played a very important positive role in giving a voice to the world’s many unheard and to promote diversity of opinion and knowledge, but it has also been abused because of the ease with which misinformation is facilitated; a double-edged sword.
“Facts versus opinions. An important distinction to make clear when science is an issue is the difference between fact and opinion. "Fact" in a scientific context is a generally accepted reality (but still open to scientific inquiry, as opposed to an absolute truth, which is not, and hence not a part of science). Hypotheses and theories are generally based on objective inferences, unlike opinions, which are generally based on subjective influences. For example, "I am a humorous person" is certainly an opinion, whereas "if I drop this glass, it will break" could best be called a hypothesis, while "the Earth orbits the Sun," or "evolution occurs over time," or "gravity exists" are all today considered to be both facts and theories (and could possibly turn out to be wrong). Opinions are neither fact nor theory; they are not officially the domain of science (but don't go thinking that scientists don't have opinions — they are only human, and opinions often help to guide their research). Thus, science cannot directly address such issues as whether God exists or whether people are good or bad.” – Dinobuzz, University of California Berkeley - Source
Like other current efforts to facilitate true AI, our solution is being designed as a truth machine that applies existing technologies in a modern way, including:
- reasoning techniques such as hypothesis testing, probability analysis, event studies, and causal analysis.
- data technologies such as the Resource Description Framework (RDF) and its equivalents, and semantic computing.
Our truth machine, the Semantic Processing for Intelligent Data Entry and Research System, “SPIDERS”, will collect data from the webs of online and offline information sources and create highly secured data vaults of information about entities such as people, companies, and events, and the relationships between them.
Information is initially categorised as Truth (a generally accepted reality) or Opinion, with the latter being sub-categorised as probable, possible, or improbable.
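As an illustrative sketch only (the class names, fields, and validation rules below are our assumptions, not a published SPIDERS specification), the categorisation above could be modelled as RDF-style subject-predicate-object claims, where a Truth carries no sub-category and an Opinion must be tagged probable, possible, or improbable:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Category(Enum):
    TRUTH = "truth"        # a generally accepted reality
    OPINION = "opinion"    # subjective; carries a likelihood sub-category

class Likelihood(Enum):
    PROBABLE = "probable"
    POSSIBLE = "possible"
    IMPROBABLE = "improbable"

@dataclass(frozen=True)
class Claim:
    """An RDF-style triple about an entity, plus its categorisation."""
    subject: str                # e.g. a person, company, or event
    predicate: str              # the relationship or attribute
    obj: str                    # the value or related entity
    category: Category
    likelihood: Optional[Likelihood] = None  # opinions only

    def __post_init__(self):
        # Opinions must be sub-categorised; Truths carry no likelihood.
        if self.category is Category.OPINION and self.likelihood is None:
            raise ValueError("an Opinion must be probable, possible, or improbable")
        if self.category is Category.TRUTH and self.likelihood is not None:
            raise ValueError("a Truth carries no likelihood sub-category")
```

For example, ("Earth", "orbits", "Sun") would be stored as a Truth, while ("Company X", "will outperform", "the market") would be an Opinion tagged with one of the three likelihoods.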
The Test Bed
We have chosen the legal services industry as our development test bed as this is an area in which truth is paramount.
Thomson Reuters recently “released a new wide-ranging Special Report, Future of Professionals, which surveyed more than 1,200 individuals working internationally” as senior professionals in legal, accounting, and other advisory roles. Their findings included:
- more than two-thirds (67%) of respondents said they believe AI will have a transformational or high impact on their profession over the next five years
- almost as many (66%) predicted that AI actually will create new professional career paths
- 45% said their biggest hope for AI was improved productivity, internal efficiency, and client services
- 67% of respondents indicated their biggest personal motivator was producing high-quality advice.
“Through the application of AI to perform more mundane tasks, professionals have the unique opportunity to address human capital issues such as job satisfaction, well-being, and work-life balance,” Steve Hasker, president and CEO of Thomson Reuters, explains.
Among legal professionals, improved productivity and efficiency were seen as the biggest positive effects of AI (75% and 67%, respectively). And among respondents at law firms, more than half (55%) see AI as an opportunity for increased revenue and lower costs. Further, a large majority (81%) of legal respondents said they expect new services that will create new revenue streams to emerge within the next five years.
A majority (58%) also said they anticipate a rise in their professional skill level, while more than two-thirds of legal professionals see a more consultative approach to advice emerging.
Importantly, these professionals were talking about plagiarising, integrity-lacking Generative AI!
Imagine what the above professionals would say if they thought a fact-based artificial intelligence technology was on the horizon of widespread availability.