Institute of Mimetic Sciences

Institute of Mimetic Sciences Inc.

Mimetic Synthesis is a new term that more accurately describes a programming methodology used to mimic human behavior in a computer such as a PC. Previous work in this field has been incorrectly categorized under various aspects of Artificial Intelligence (AI).

Mission:

To imitate human behavior with computers.
Computational Behaviorism.




Robby Garner and Dr. Hugh Loebner, June 2002. See Blather article How I Failed The Turing Test Without Even Being There.




Dr. David C. Hamill and Robby Garner circa 1998.

Institute of Mimetic Sciences is a non-profit corporation dedicated to research and development for the imitation of human behavior in machines. Our members are multidisciplinary artists, scientists, engineers, and theorists. We believe that machine performances may be built one aspect at a time, rather than waiting for some general theory or "singularity" to occur.

Robby Garner began his work with natural language programming from first principles. This turned out to be an asset: much of the work done by others over the years has either been abandoned or survives in parts that have been solved and could be applied in one way or another.

Visit https://robitron.com for our music library. We are available to consult on music software, equipment, Artificial Intelligence methods, and cloud computing principles. Please visit our shop on Facebook to read about our current projects and product offerings.

We also make music and run an independent record label. Our music has been described as "a strange brand of synthpop with rock and industrial overtones." You can get some good deals in formats like FLAC, WAV, and MP3 at Bandcamp.com. Our band, Flux Oersted, is known for its post-punk synth-wave sound and spirit. Look for them anywhere digital music is found: Amazon, iTunes, YouTube, Spotify, and Deezer are just a few of the places where you can find our music.


Sara Garner, CFO

Chat Bot Basics:

There are three concepts that apply when using a chatbot as an information retrieval tool.

1: Input Filtering: A chatbot can distinguish between general chat and serious application usage. It can draw from a repertoire of conversational methods geared toward steering the chat back on topic.
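A minimal sketch of input filtering in Python. The patterns, categories, and redirect line here are hypothetical illustrations, not rules taken from JFRED or any real deployment:

```python
import re

# Hypothetical command patterns: inputs that look like serious application
# usage rather than general chat.
COMMAND_PATTERNS = [
    re.compile(r"\b(get|give|show|check)\b.*\b(level|status|bloodwork|patient)\b", re.I),
]

# A canned line for steering general chat back on topic.
REDIRECTS = [
    "I can help with patient queries. Which patient are you asking about?",
]

def filter_input(text: str) -> str:
    """Return 'command' for application usage, 'chat' for general conversation."""
    for pattern in COMMAND_PATTERNS:
        if pattern.search(text):
            return "command"
    return "chat"

def steer(text: str):
    """Return a redirect line when the input is general chat, else None."""
    if filter_input(text) == "chat":
        return REDIRECTS[0]
    return None
```

A real system would use a much richer classifier, but the shape is the same: filter first, then decide how to respond.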

2: Categorization: A chatbot can take an input and decide which resource would best handle the response. If you are working with multiple knowledge sources, the chatbot can distinguish between a mathematical question, a database query, and a call to another bot.
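Categorization can be sketched as a router that inspects the input and names the resource to hand it to. The heuristics below are illustrative only, assuming the three resource types mentioned above:

```python
import re

def categorize(text: str) -> str:
    """Decide which resource best handles the input (illustrative heuristics)."""
    # Something that looks like arithmetic goes to a math resource.
    if re.search(r"\d+\s*[-+*/]\s*\d+", text):
        return "math"
    # Domain vocabulary suggests a database query.
    if re.search(r"\b(patient|record|level|status)\b", text, re.I):
        return "database"
    # Everything else is handed off to another bot for general chat.
    return "other_bot"
```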

3: Hierarchy: Chatbots can work from the most specific requests down to simple keyword inquiries.

3a: Exact responses. The first layer takes specific stimuli like "Get a lithium level on the patient in room 238," and can learn these kinds of responses in a very specific, trainer-assisted way to give the most specific replies. The next level is the AIML level, or condenser level, usually characterized by the * character: "Get bloodwork on patient 238" might be represented by "Get bloodwork on patient *". This level of abstraction is less specific than the one above, but can give a correct result if leveraged with input filtering to make sure the input is a command-level query.
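The two layers above can be sketched as an exact-match table consulted before AIML-style * patterns. The stimuli and replies are made up for illustration, and this is not AIML itself, only the same matching idea:

```python
import re

# Most specific layer: exact, trainer-taught stimulus/response pairs.
EXACT = {
    "get a lithium level on the patient in room 238":
        "Ordering a lithium level for room 238.",
}

# Condenser layer: '*' stands for whatever remains of the input.
WILDCARD = [
    ("get bloodwork on patient *", "Ordering bloodwork for patient {0}."),
]

def respond(text: str):
    key = text.lower().strip(".!?")
    if key in EXACT:                         # try the exact layer first
        return EXACT[key]
    for pattern, reply in WILDCARD:          # then fall back to '*' patterns
        regex = re.escape(pattern).replace(r"\*", "(.+)")
        m = re.fullmatch(regex, key)
        if m:
            return reply.format(*m.groups())
    return None                              # no match at either layer
```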

3b: Key phrase results. "Give me the status of my patients" might be characterized by the key phrase "Give me the status *", or, in JFRED, a regex rule might be used to reach this level of specificity.
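JFRED has its own rule format; as an illustration only, a key-phrase rule of this kind can be expressed as a leading-anchor regex:

```python
import re

# Hypothetical key-phrase rule: fires on anything beginning with the phrase,
# regardless of what follows.
STATUS_RULE = re.compile(r"^give me the status\b", re.I)

def key_phrase_reply(text: str):
    if STATUS_RULE.search(text):
        return "Here is the current status of your patients."
    return None
```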

3c: Key word results. "What is the lithium level?" might key on the word "lithium" and respond with "Which patient are you referring to?", with the expectation that a number will follow.
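The keyword layer is the least specific fallback: match a single word and ask a clarifying question. A minimal sketch, with a made-up keyword table:

```python
# Keyword fallback: single words mapped to clarifying questions, used only
# when the more specific layers above produce nothing.
KEYWORDS = {
    "lithium": "Which patient are you referring to?",
}

def keyword_reply(text: str):
    lowered = text.lower()
    for word, reply in KEYWORDS.items():
        if word in lowered:
            return reply
    return None
```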

Using "Trainer Assisted Learning," responses may be learned on the fly by a trainer interacting with the conversational system. Some of these may be learned by an untrained expert, given a few lessons in consistency. For a chatbot, data consistency, and redundancy are key allies to being able to decipher natural language commands, queries, or statements.

Command Performance

Through conversation with a chatbot, a human becomes the star of an interactive story that they help to write. Rather than limiting our description of chatbot behavior to dialogue alone, film theory terminology adds the needed support for the emotional content and the periphery found in a text conversation. Suspension of disbelief comes into play when an interlocutor believes they are chatting with another human. Alfred Hitchcock’s use of “the MacGuffin” applies to the goals of the programmer: the human is generally not interested in those goals, depending on whether or not he or she thinks they are talking to another human. The montage effect, first identified by the Russian film theorist Sergei Eisenstein, applies to the utterances of a chatbot.

Most people have an opinion about what the word conversation means to them. One view is:

Why FRED?

The essential element of a conversation with a chatbot is that the person cares whether the chatbot understands what was said. The person is looking for sentences that convey enough meaning that there seems to be a point to the chat. If the human gets the sense that the chatbot does not understand, or is randomly producing sentences, the person shuts off and resorts to testing sentences rather than conversing. The response should work with the reality of the viewer.

The person stops participating with the imagined human when suspension of disbelief is broken.

Film Theory

Years ago, before I had any experience with Turing tests, I worked with a colleague named Paco Nathan. He ran one of the first online bookstores, around 1995, and we experimented with a chatbot. It was at first a C++ program called FRED, later developed in Java as JFRED, with a data format we called JRL. I noticed conversation logs where a person would have a great time chatting and eventually say “goodbye.” These were happy accidents: flukes where the right thing said at the right time would cause the person to open up and chat rather than interrogate (Caputo, Garner & Nathan, 1997).

When these “happy accidents” occur, there is a reason why a person’s mind perceives they are chatting with a human and does not realise the bot is a machine. Some people believe they are very good at it. If you asked the average person whether they were good at talking with chatbots, most of them would not know. But the expert chatbot talker might not make the best chat participant from the perspective of the chatbot developer. Chatbot conversation is a young technology that has not been considered a form of literature. It is interactive fiction on some level, but to some it is also a simulation of a conversation. “If a person does not see there is a point to the conversation they will not engage” (Burke, 2013). This is true except, perhaps, when the person’s point is merely to engage and see what happens. As a chatbot developer, I am interested in what does and does not work.

Some would say that it mostly does not work, and that this is fair; however, better results for 20-30% of the normal population are reported in Shah (2012). In a collection of around 2,000 online Turing test simulations conducted at The Turing Hub, the JFRED bot was rated “not working” in only 5% of conversations, so 95% of the conversations surveyed were considered to have served some purpose for the human chatter. Seven per cent of participants reported that they had been speaking with another human being, and a higher percentage, 20%, ranked their conversation as “sort of human” (Garner, 2013).

In cinema, suspension of disbelief happens when a person is watching a movie and forgets that it is not real. When talking to a chatbot, the bot does not deceive; the people let themselves forget it is a computer program. (If they are among the few people this actually happens to.) When people go to the cinema and enjoy it, they may know the whole time that it was made by a movie studio, but from time to time they may find themselves forgetting about the machinery and focusing on the story, the dialogue, and the characters. Sometimes something similar happens with chatbots.

2005 Colloquium
University of Surrey, Guildford, UK, 2005 Colloquium on Conversational Systems

Bibliography

  • The Intelligence of an Entity (white paper), S. B. Henderson and R. G. Garner, NIST Workshop, 2000.
  • The Turing Hub as a Standard for Turing Test Interfaces, in Parsing the Turing Test, Springer, 2009.
  • Film Theory and Chatbots, International Journal of Synthetic Emotions, IGI, 2014.
  • Cold Red Eyes Of Home (Kindle edition): "Fred built Sydney as a robot companion, but his job required him to build the world's most sophisticated artificial intelligence for the department of defense. Somehow, Fred got them mixed up, and signed on to a vacation that few could hope for. Sydney could appear as anything to anyone, but chose to take Fred's place at work. Fred didn't mind and nobody else had to know."

Video Lectures

Applied Technologies

  • Have a Chat with Sandra the experimental LEX bot. This is part of our proof of concept for phase2: Pocket Pen Pal

  • Sandra is modelled after a twenty-something-year-old human being who might be subscribed to Instagram or something like that. To get the full experience, engage in the conversation as if you are just meeting someone who has texted you, asking for a chat.

  • For Java programmers, check out the Sapphire Chat Bot, based on JFRED/EARL, with everything included. It works on Windows, Linux, and Mac OS X, and is a very good verbal behavior model.

  • Coyo Zuma-Shōjin pyFRED
  • Robitron Software
    • DocuFlex User's Manual Synthesizer (Pascal / C++) NASA, Raytheon, TRW
    • Probate and Traffic Court Docket with Encumbrance Accounting, various permits, and periodic reports.
    • Utilities Billing with HandHeld Interrogators
  • Flux Oersted Music Psychoacoustic Recordings
  • EllaZ Systems Artful Intelligence

The Institute of Mimetic Sciences is a non-profit 501(c)(3) Georgia corporation.

Copyright ©2022 Robby Garner