Philosophy, Logic, and Beliefs in AI
At ID, we understand Artificial Intelligence as something that can simulate the human mind--an entity that can assume and believe much as we do, but with a more precise logic. Others in the field may characterize AI as search algorithms (as many university courses do), or as general problem-solving or learning entities.
Philosophers study ways to define knowledge and reality in a logical sense by breaking knowledge down to its fundamental elements. Our AI programs must have a predefined sense of logic in order to simulate the human mind.
Simulation or Human?
Some may argue that AI can only simulate and never become a human mind; however, if the simulation is close enough, what is the difference? This is precisely what the Turing test measures--not whether a computer is simply failing to simulate, but whether we, as judges, can tell a human from a computer masquerading as one.
In philosophy, we have beliefs; however, only some are justified, and sometimes justified beliefs are still false. If knowledge must be infallible and can never be false, then the only knowledge we really have, in the Foundationalist sense, is that we think and therefore we exist. Some may argue that our senses are the detectors of truth, but even our senses can be deceived--we could be trapped in a Matrix or a computer program.
In Artificial Intelligence, we must assume some things are true and use source verification to form the foundations of the belief system. These assumptions may be mistaken, but for the purpose of simulating a human they are true enough that few will doubt them. A program's senses are usually limited to text input, so an AI will have text as its only real sense (and perhaps the internet).
Therefore, AI programs must verify the text they receive is true by applying logic. In order to apply this logic, a set of knowledge and foundational truth values must be set.
If the conversation with the AI goes as follows, the AI must be able to detect logical fallacies and catch the user lying by recalling previous knowledge:
Bob: Anyone who uses weapons and tools to allow blood to spill from the person and for him to feel pain is committing a crime.
AI: However, a surgeon may hurt people with surgical instruments but it is legal and ethical because he/she is helping someone heal from an illness.
Bob commits a fallacy of accident, applying a general rule while disregarding a legitimate exception. Most AI programs need a strong sense of deduction and induction to understand the world they perceive through text.
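The exchange above can be sketched as exception-aware rule application. This is a minimal illustration, not a real fallacy detector: the rule names, roles, and knowledge base below are all hypothetical.

```python
# A tiny hand-built knowledge base: each general rule carries the
# exceptions (the "accidents") that the fallacy of accident ignores.
RULES = {
    "causing_injury": {"conclusion": "crime",
                       "exceptions": {"surgeon", "dentist"}},
}

def judge(action: str, actor_role: str) -> str:
    """Apply a general rule, but check known exceptions first."""
    rule = RULES.get(action)
    if rule is None:
        return "unknown"  # the AI must be able to not know
    if actor_role in rule["exceptions"]:
        return "not a " + rule["conclusion"]
    return rule["conclusion"]

print(judge("causing_injury", "surgeon"))  # not a crime
print(judge("causing_injury", "mugger"))   # crime
```

A real system would need far richer representations, but the shape is the same: a conclusion is only as good as the exceptions the knowledge base records.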
The Scientific Method
The AI must use the scientific method whenever it can, eliminating alternative theories that might lead to the same result. Sometimes this is not possible, however, and some assumptions must stand in, since other limitations may prevent us from agreeing upon a reasonable conclusion.
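One way to sketch this elimination of alternative theories is a Bayes-rule update: a theory that cannot produce the observed evidence receives likelihood zero and drops out. The theories and numbers below are invented for illustration.

```python
def update_beliefs(priors, likelihoods):
    """One Bayes-rule update.
    priors: {theory: P(T)}; likelihoods: {theory: P(evidence | T)}.
    Returns the posterior {theory: P(T | evidence)}."""
    unnormalized = {t: priors[t] * likelihoods[t] for t in priors}
    total = sum(unnormalized.values())
    return {t: p / total for t, p in unnormalized.items()}

# Two rival theories; the observed evidence is impossible under B,
# so the update eliminates it entirely.
priors = {"A": 0.5, "B": 0.5}
likelihoods = {"A": 0.8, "B": 0.0}
print(update_beliefs(priors, likelihoods))  # {'A': 1.0, 'B': 0.0}
```

When no theory can be driven to zero, the posterior simply records how assumptions shift the balance rather than settling the question.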
The AI should have the ability to debate in order to arrive at a better conclusion, much like a human. An AI is not infallible; it should understand that it can make mistakes, and it should maintain an extensive knowledge network, even detailing its history of acquiring knowledge, for when it is confronted by different sources. (For this reason, AI should not be used in tasks that may involve the safety of others.)
The AI must also be able to not know an answer when it has no previous knowledge of the subject, though it may try to acquire that knowledge from some other source.
If a source tells it a supposed fact, the AI may trust that source until someone else presents a contradicting fact, at which point it must apply statistics to determine which claim is more likely and more logical.
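A crude version of that statistical tie-breaking is to weight each claim by the credibility of its source and keep the best-supported statement. The claims and credibility scores below are hypothetical.

```python
from collections import defaultdict

def resolve(claims):
    """claims: iterable of (statement, source_credibility in [0, 1]).
    Sums credibility per statement and returns the best-supported one."""
    support = defaultdict(float)
    for statement, credibility in claims:
        support[statement] += credibility
    return max(support, key=support.get)

claims = [
    ("the meeting is on Monday", 0.6),   # first source
    ("the meeting is on Tuesday", 0.9),  # a more credible, contradicting source
    ("the meeting is on Monday", 0.2),   # a weak corroborating source
]
print(resolve(claims))  # the meeting is on Tuesday
```

Real credibility would itself have to be learned and revised, since, as the next section notes, even the most credible source can deceive.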
AI must have an ability to propose new ideas and hypotheses for a certain problem or to determine an event's cause.
In history, if an event has different interpretations, one must decide to side with the interpretation of the most credible source. This can certainly lead to false beliefs and trickery, because even the most credible source can deceive others for a political agenda--however, the AI should be able to take the facts given by the credible source and arrive at a conclusion using logic, debate, induction, and deduction.
This is a perpetual problem for humans: we lack the mental capacity, or sometimes the willpower, to learn and confirm enough facts to arrive at a definitive conclusion, so we take the lazy route and form a conclusion without foundation (much as in infinitism or coherentism in philosophy).
A Difficult Problem, a Flawed Solution
For example, if Bob says Mary murdered George, there must be proof. What if the only evidence is that Bob says he witnessed Mary murder George? Then the AI, like a detective, must look at motivation to understand why Mary would want to murder George: listen to both sides, the suspect and the witness, determine whose story makes more logical sense, and then arrive at a conclusion.
In order to do this, the AI must be willing to search perpetually for more information and sources to confirm various facts, until a fact is found that proves beyond a reasonable doubt how George died.
Such a system is flawed, as a court system is, but it is the method that would statistically lead to more true conclusions than false ones.
An AI must differ from a human in that it cannot hold a belief that has no foundation or proof, unless the belief has a positive effect. For example, an AI may understand religion and use its lessons in analyzing someone, but need not believe in it. In other words, an AI must understand what someone else holds to be true and build a strategy from that when analyzing or conversing with that person.
Hard Coding Fundamental Knowledge
One problem with these approaches is where to draw the line between hard coding core knowledge and allowing the software to reach conclusions dynamically. This is a subjective matter, but essentially the less knowledge hard coded the better; the AI should be allowed to form its own conclusions through its logical algorithms.
In addition, an AI can have various algorithms for solving statistical or logical problems, but they must all come together into a super algorithm that can receive any type of problem and produce a solution or conclusion about it.
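The simplest reading of such a super algorithm is a dispatcher that classifies a problem and routes it to the specialized algorithm for its type. The solver names, problem fields, and classification scheme below are all hypothetical.

```python
def solve_statistical(problem):
    """A stand-in statistical solver: the mean of some samples."""
    samples = problem["samples"]
    return sum(samples) / len(samples)

def solve_logical(problem):
    """A stand-in logical solver: modus ponens over boolean fields."""
    if problem["premise"] and problem["implication"]:
        return problem["conclusion"]
    return None

SOLVERS = {"statistical": solve_statistical, "logical": solve_logical}

def super_solve(problem):
    """Route any problem to the specialized algorithm for its type."""
    solver = SOLVERS.get(problem["type"])
    if solver is None:
        return "no known method"  # the AI admits it does not know
    return solver(problem)

print(super_solve({"type": "statistical", "samples": [2, 4, 6]}))  # 4.0
```

The hard part the sketch hides is the classifier itself: deciding which kind of problem a piece of text poses is arguably as difficult as solving it.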
It may take years for an AI of this sophistication to be developed, and even then, what would be the endgame or motivation for developing one?
Perhaps competing AI software will one day be used to arrive at a range of different conclusions, and the list of logical steps each program generates could help people in many fields understand complex narratives and arrive at a more definitive conclusion. It could benefit our debates over everything.