Segments of the Artificial Psychology System — Rebirth in AI #5 — Phoenixite
This is the fifth in a series of posts meant to record and disseminate the author’s thoughts on how a human personality may be kept alive after death through artificial intelligence.
Overview of this Piece
Here, I address the components of the AI personality system which Phoenixite is aiming to produce. The system, once constructed, will allow a person to create an artificial clone of themselves which may be accessed when they are absent.
An artificial copy of a human will possess that human’s personality. Therefore, I need a way to convert a person’s personality into numbers and categories which can then be encoded into a neural network.
Two relevant models exist for this purpose: the Big Five and the Myers–Briggs Type Indicator (MBTI). The former lets us deal in numbers, and the latter lets us deal in categories. Of the two models, the Big Five is more highly esteemed by researchers because of its greater precision.
So a human’s personality can ideally be described numerically with the Big Five model. The description of the personality is the first step in producing an AI copy of oneself.
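To make the idea concrete, a numerical Big Five description could be represented as something like the following minimal Python sketch. The class name, the 0.0–1.0 scale, and the example scores are all illustrative choices of my own, not a fixed part of the system:

```python
from dataclasses import dataclass

@dataclass
class BigFiveProfile:
    """A personality expressed as five trait scores on a 0.0-1.0 scale."""
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

    def as_vector(self):
        """Return the profile as a plain list, ready to feed to a model."""
        return [self.openness, self.conscientiousness, self.extraversion,
                self.agreeableness, self.neuroticism]

# A hypothetical profile: imaginative, fairly agreeable, introverted, calm.
profile = BigFiveProfile(openness=0.82, conscientiousness=0.45,
                         extraversion=0.30, agreeableness=0.67,
                         neuroticism=0.25)
```

Representing the profile as a vector keeps it in a form that a neural network can consume directly as weights or inputs.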
Some people whom we may want to construct AI copies of are already dead, and they did not produce quantitative descriptions of their personalities. This raises the question of how their numerical personality profile might be constructed. I believe that lemmatization and sentiment analysis can be used for this.
Many great minds of antiquity produced writings which we still possess. We can review these writings in order to approximate their personalities according to the Myers–Briggs Type Indicator. The resulting type can then be converted into a list of Big Five scores in order to develop a personality profile which can then be programmed.
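The conversion step might be sketched as below. Psychometric research has reported correlations between four of the MBTI's letter axes and four Big Five traits (E/I with extraversion, S/N with openness, T/F with agreeableness, J/P with conscientiousness), but the 0.75/0.25 scores and the neutral neuroticism default here are placeholder values of my own; the MBTI does not measure neuroticism at all:

```python
def mbti_to_big_five(mbti_type):
    """Crude, hypothetical mapping from a four-letter MBTI type string
    (e.g. "INTJ") to placeholder Big Five scores on a 0.0-1.0 scale."""
    t = mbti_type.upper()
    return {
        "extraversion":      0.75 if t[0] == "E" else 0.25,  # E/I axis
        "openness":          0.75 if t[1] == "N" else 0.25,  # S/N axis
        "agreeableness":     0.75 if t[2] == "F" else 0.25,  # T/F axis
        "conscientiousness": 0.75 if t[3] == "J" else 0.25,  # J/P axis
        "neuroticism":       0.5,  # the MBTI has no neuroticism axis
    }

scores = mbti_to_big_five("INTJ")
```

A real conversion would need graded scores rather than a binary split per axis, but the shape of the computation would be similar.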
Development of the PyPsych Library
The human brain is filled with cognitive errors and strange biases. These biases are not accounted for in any humanlike AI developed thus far. Therefore, these AIs are unable to be humanlike, because they lack human tendencies.
A human commits logical fallacies, but a computer does not. Therefore, computers cannot be humanlike until they learn to think poorly.
Now, to my knowledge, no developer thus far has seen fit to produce a collection of functions which mimic human insanity. So a library of modules meant to mimic erratic human behavior must be developed in order to teach a program to think as a human would. This collection is the PyPsych library. I intend to produce it in Python 3 because this is the programming language with which I am most familiar. Presumably, the functions therein will be transferable to other languages.
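As a purely hypothetical sketch of what one PyPsych function might look like, here is an implementation of the well-documented anchoring heuristic, in which a previously seen number drags a person's estimate toward it. The function name, signature, and `strength` parameter are my own inventions, not part of any existing library:

```python
def anchoring_bias(estimate, anchor, strength=0.3):
    """Pull an otherwise rational estimate toward a previously seen anchor,
    mimicking the human anchoring heuristic. `strength` (0.0-1.0) controls
    how strongly the anchor distorts the estimate."""
    return estimate + strength * (anchor - estimate)

# A rational estimate of 100, anchored by a recently seen figure of 50,
# drifts down toward the anchor.
biased = anchoring_bias(100.0, 50.0)
```

A library of such functions, each modeling one documented bias, could then be sprinkled into a decision process to make its outputs less coldly logical.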
I do not know enough about neural networks to write intelligently on this topic. I do know that the network's layers will be expressed as a decision tree which is activated when a person poses a question to the AI. The question triggers a process which produces a decision. The decision will be a sentence answering the user's question. It will be fed to an interface, presumably a chatbot, which returns the answer to the user. The functions of the PyPsych library will be embedded within different nodes of the network, and the Big Five personality score provides the weights for different decision pathways.
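Though the network design is still open, the weighting idea can be sketched simply: candidate answers tagged with trait relevances are scored against the Big Five profile, and the best match wins. The tagging scheme and all names below are illustrative assumptions, not a description of the real decision tree:

```python
def choose_response(candidates, profile):
    """Pick the candidate answer whose trait tags best match the profile.
    `candidates` is a list of (text, {trait: relevance}) pairs and
    `profile` maps trait names to 0.0-1.0 Big Five scores."""
    def score(tags):
        return sum(profile.get(trait, 0.0) * relevance
                   for trait, relevance in tags.items())
    return max(candidates, key=lambda c: score(c[1]))[0]

# An extraverted, calm profile should prefer the sociable answer.
profile = {"extraversion": 0.9, "neuroticism": 0.2}
candidates = [
    ("Let's talk it over with everyone.", {"extraversion": 1.0}),
    ("I'd rather worry about this alone.", {"neuroticism": 1.0}),
]
best = choose_response(candidates, profile)
```

In a real network the trait scores would act as learned weights rather than a hand-written scoring rule, but the principle of personality-weighted pathways is the same.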
The network will need to use information to make its decision, and the information must be comparable to that possessed by the human of which the AI is a clone. This leads to my next point.
People with the same personality will produce different answers to questions if they possess different information. Therefore, in order for an AI clone to be similar to the parent human, it must possess knowledge comparable to that of the human. This forces us to both describe knowledge and approximate the quantity and type of the knowledge residing in the original person’s head.
I have no idea of how to do this. I’ll come back to it later.
A person's knowledge must be approximated and kept somewhere so that the AI can draw from it during the decision-making process. Presumably, an RDBMS can be used for this.
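A minimal sketch of such a store, using SQLite, an RDBMS bundled with Python's standard library. The table layout (topic, fact, confidence) is a placeholder of my own, since the knowledge-description problem is still unsolved:

```python
import sqlite3

# An in-memory relational store for approximated knowledge.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE knowledge (
        id         INTEGER PRIMARY KEY,
        topic      TEXT NOT NULL,
        fact       TEXT NOT NULL,
        confidence REAL DEFAULT 1.0  -- how sure we are the person knew this
    )
""")
conn.execute("INSERT INTO knowledge (topic, fact) VALUES (?, ?)",
             ("ethics", "Virtue is a mean between extremes."))
conn.commit()

# During decision-making, the network would query by topic.
rows = conn.execute(
    "SELECT fact FROM knowledge WHERE topic = ?", ("ethics",)
).fetchall()
```

Swapping `":memory:"` for a file path would make the store persistent, which a physical device would obviously require.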
The neural network produces a sentence in response to a question which the user of the system poses. The sentence should be grammatically correct and similar to a response which the original human would produce. Certain article rewriters have been produced in order to construct such sentences, although they are still primitive.
The sentence which the neural network returns should be verified using sentiment analysis in order to ensure its similarity to the response that the original human would provide.
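A toy illustration of that verification step. A real system would use a trained sentiment model (for example NLTK's VADER); the hand-rolled word lists and the sign-matching rule below are placeholders of my own:

```python
# Placeholder sentiment lexicons; a real model would be far richer.
POSITIVE = {"good", "great", "wise", "virtuous"}
NEGATIVE = {"bad", "foolish", "wicked"}

def polarity(sentence):
    """Count positive words minus negative words in the sentence."""
    words = sentence.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def matches_expected(sentence, expected_polarity):
    """Accept the generated sentence only if its polarity sign agrees with
    the polarity the original human's response is expected to have."""
    return (polarity(sentence) > 0) == (expected_polarity > 0)
```

If the check fails, the network could be asked to regenerate the sentence rather than deliver an answer out of character.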
The sentence must be delivered to the user. A simple chatbot can be used for this purpose.
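One turn of such a chatbot might look like the sketch below, where `answer_fn` stands in for the entire pipeline described above (network, knowledge store, and sentiment check). The function and its quit convention are my own illustrative choices:

```python
def respond(question, answer_fn):
    """One turn of a minimal chatbot: route the user's question through
    `answer_fn` and label the reply. Returns None when the user quits."""
    if question.strip().lower() in {"quit", "exit"}:
        return None  # end of conversation
    return "Clone: " + answer_fn(question)

# Usage: wire the turn function into a read-answer loop.
reply = respond("What is virtue?", lambda q: "A mean between extremes.")
```

Wrapping `respond` in a `while` loop around `input()` would give the full console interface.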
The AI clone should be designed so that it can be easily stored, moved, cleaned, etc. Ideally, it will be a small cube which a person can hold in their hands. When the user wants to speak to the AI, they will place the cube on a table, turn on the device, pose a question, wait, and then receive an answer.
The device would be made more humanlike if it responded with a voice similar to that of the parent human. This would not be necessary for the basic function of the system, but it would be preferable to include a voice response which answers questions in a timely manner.
If it is possible to create a hologram of a person and synchronize its responses with the chatbot's answers, then that would be excellent. Again, this is a quality-of-life improvement which need not be present in the final product. However, if this can be developed, then the commercial appeal of the system will increase dramatically.

*Note to self: study holograms.*
Potential Topics for the Next Entry
- Computer voice production
- API calls
- Sentence generators
- Knowledge production
- Personality assessments from text
Originally published at https://phoenixite.com on February 11, 2021.