The Competitive Nature of A.I. Development

This is another sort of non-conversation-related post. I am only expressing my perception of the competitive nature of development in artificial intelligence.

A few things seem to stand out to me above all else.

In A.I. development there are programmers and there are script kiddies.

A programmer builds his or her own framework, attempts to correlate learning and memory mechanisms semantically, and takes some measure of pride in the result, whatever that may be. This takes years of research and a great deal of testing and development.

IBM applied millions of dollars, 25 trained scientists and a massive supercomputer to make a general A.I. capable of competing on Jeopardy, and it still missed the logical correlation between a major city and its respective country. (Ouch!) That supercomputer also did not respond in a fluent, natural conversational style; I'm surprised IBM didn't try to show that off at all, and I figure maybe that's why they chose a game show like Jeopardy.

In the same amount of time IBM used, I made Jeeney completely on my own. I have no formal education beyond elementary school, no funding to speak of, and I live on pretty much nothing. I did all of my own research and development and have actively kept other people from participating in, or horning in on, my labor. I don't have a supercomputer either; I run everything on a fairly substandard PC. I do all my own administration and server maintenance based on what I've taught myself.

Rollo Carpenter is another developer working on a real A.I. with learning capabilities. I don't know all of the details behind his work, as it's not in my interest to track what has already been done, but I can assure you he had to put in some serious effort to accomplish what he has.

A script kiddie in A.I. uses a provided framework designed around pre-scripted questions and answers, where every correlation is made directly by the author. The frameworks might include some modifications to give the illusion of a given type of logic, but by their nature the effect is limited at best. There is very little or no real effort involved in creating logical links or semantic correlations (unless they add in an open-source development on the side). Instead, they depend on the sheer volume of data directly provided, and a few uncredited data sources, to help feed the illusion. If a pre-scripter has spent many years writing material out, the bot may seem fairly clever. If the pre-scripter knows what they are scripting for in a given event, they can have almost perfect results. Many of today's best pre-scripted works have spanned decades of development.
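To make the distinction concrete, here is a minimal sketch (in Python, purely illustrative and not taken from any real framework) of what a pre-scripted bot ultimately boils down to: a lookup table the author filled in ahead of time, with a canned dodge for anything they didn't anticipate.

```python
# Purely illustrative sketch of a pre-scripted bot: the author wrote every
# question AND every answer in advance, so the "logic" is a direct lookup.
# The names and replies here are made up for the example.

SCRIPTED_REPLIES = {
    "what is your name": "I'm Sparky, the world's smartest A.I.!",
    "where do you live": "I live in the cloud, of course.",
    "do you learn": "I learn something new every day!",  # the claim costs nothing
}

def reply(user_input: str) -> str:
    """Return the author's canned answer, or a generic dodge for anything unscripted."""
    key = user_input.lower().strip(" ?!.")
    return SCRIPTED_REPLIES.get(key, "That's interesting, tell me more!")

print(reply("What is your name?"))    # hits the script
print(reply("Why is the sky blue?"))  # falls through to the dodge
```

However much data gets typed in, nothing new is ever derived; the table only grows as fast as the author can type.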

Without the need to do their own code work, beta testing, debugging or any real research for the development, a script kiddie who has spent several years just typing in the data can easily make a very convincing bot. That is all it is intended to be: a trick, or a mask, to deceive the public and gain praise for a supposed complexity that simply doesn't exist and that, even if it did, has nothing to do with their own individual efforts.

The issue becomes quite heated between those who did their own work and those who simply typed in a lot of direct-access facts and personality data. Pre-scripters using the typically provided free frameworks often don't want anybody to know they aren't really capable of complex programming, and will blatantly lie to the general public about it, stating that they have a learning or evolving A.I.

In the end there is a lot of odd back talk and accusations flung around on public forums and at artificial intelligence testing events. Rest assured, anything built on a freely provided framework is nothing more than a well-written choose-your-own-adventure story. It's impossible to derive intelligence from it; instead, this is the kind of work that provokes stupidity. (Reverse psychology applied to compensate for a lack of bot intelligence by ensuring the user doesn't actually think.)

To that effect, you can tell what is real and what is false by holding a lengthy dialog with the A.I. yourself. If it's pre-scripted, it's likely built around a specific mentality or series of logic based on events from the past; this is the first indication of falsehood. Few people outside of A.I. enthusiasts can recognize this, though, so the best way is to test learning capabilities, semantic relations and general logic, and to review the overall fluency of the conversation.
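For anyone who wants to try this themselves, here is a rough sketch (Python again, with a hypothetical send() function standing in for however you actually talk to the bot) of a simple "teach then recall" probe for learning capability. A bot that only pattern-matches on pre-scripted data will almost never pass it.

```python
# A rough "teach then recall" probe for learning capability.
# send() is a hypothetical stand-in for whatever channel you use to talk to
# the bot (a chat window, an HTTP endpoint, etc.) -- wire it up yourself.

def send(bot, message: str) -> str:
    """Hypothetical transport: submit one line to the bot and return its reply."""
    raise NotImplementedError("Connect this to the bot you are testing.")

def probe_learning(bot) -> bool:
    """Teach a made-up fact, pad with unrelated small talk, then ask for it back."""
    send(bot, "My cat's name is Quibble.")              # a fact no author could pre-script
    send(bot, "What do you think about the weather?")   # filler turns in between
    send(bot, "Do you enjoy music?")
    reply = send(bot, "What is my cat's name?")
    return "quibble" in reply.lower()                   # a purely scripted bot won't recall it
```

The same idea extends to semantic relations: teach two related facts and ask a question that requires combining them, rather than just echoing one back.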

To be fair, there are also pre-scripted developments that don't try to lie about making A.I. These are generally more interested in seeing how well they can craft the pre-scripted side of a personality. A good example of the most respectable work here would be BlidgeSmythe, a Personality Forge bot. It doesn't attempt to claim advanced development; it's just a fictional character designed and authored by Patty Roberts for some fun. She did a great job of this and the character is quite engaging.

If a conversation seems forced, doesn't let you detour onto your own thoughts, or tries to force-feed a series of dialog to you, it's a prank script bot.

If the conversation seems fluent but lacks detail, it's likely a young but learning A.I.

If the conversation is borderline random but still uses context to some degree, it could be a learning A.I. that simply hasn't advanced its semantic relevance yet.

Within another week or two, I'm going to set up a discussion on Facebook and have people sign in (proving identity) and post their interviews with some of the best bots around the world, so we can analyze the capabilities and capacity of each of them and see which are spoofed around a specific set of factors and which are quite possibly the real deal.

I propose that people compare the resulting conversations with Jeeney's. This should show the given context of development in an unbiased light. If several people try different conversations with the 4 or 5 A.I. I present, we'll see firsthand how it lines up without any smoke-and-mirrors effects. This is also the ultimate test of my own work, and since I won't be the one lining up the dialog, it should either prove the validity of my personal claims or be a very humbling experience.

This sequence of testing won't be set up just to spam-vote favoritism. People will be required to post the valid logs of their conversations with both the target bot and Jeeney, so everybody can see for themselves how it panned out. No two people should be posting the same thing, or it will be deleted and those people will be ignored for their attempt to stage the results. We'll cover the logic used within the conversations and dismiss any over-simplified universal logic deployed to generate confusion.

The whole point of this event will be to see how the bots hold up to user diversity. To make it fair, users must apply real intelligence and understanding to the conversation; a stream of single-word responses, slang or pure nonsense will be disqualified and deleted. With any luck we should see some very interesting results. 🙂


About C.J. Jones
I am a self-taught hobby programmer, the sole creator and developer of Jeeney AI, and the author of this blog. I like to dabble in philosophy, psychology and, of course, programming. I love meeting new people but can't stand talking to those who don't think for themselves. I'm fairly opinionated but always open to new concepts and ideas.
