I, Eugene Goostman

The idea of artificial intelligence, and the hopes and fears connected with its rise, is fairly prevalent in our common subconscious. Whether we imagine Judgement Day at the hands of Skynet or egalitarian totalitarianism at the hands of V.I.K.I. and her army of robots, the result is the same: the eventual displacement of humans as the dominant life form on Earth.
Some might call it the fear of a technophobic mind, others a tame prophecy. And if the recent findings at the University of Reading (U.K.) are any indication, we may have already begun fulfilling said prophecy. In early June 2014 a historic milestone was supposedly reached: the passing of the iconic Turing Test by a computer programme. Hailed around the world as the birth of artificial intelligence, or derided as a clever trickster-bot that proved only technical skill, the programme known as Eugene Goostman may soon become a name embedded in history.
The programme, or Eugene to his friends, was originally created in 2001 by Vladimir Veselov from Russia and Eugene Demchenko from Ukraine. Since then it has been developed to simulate the personality and conversational patterns of a 13-year-old boy, and it competed against four other programmes and came out victorious. The Turing Test was held at the world-famous Royal Society in London and is considered one of the most comprehensively designed tests of its kind. The requirement for a computer programme to pass the Turing Test is simple yet difficult: the ability to convince a human being that the entity they are speaking with is another human being at least 30% of the time.
The event in London earned Eugene a 33 percent success rating, making it the first programme to pass the Turing Test. The test itself was all the more challenging because it involved 300 conversations, with 30 judges as human subjects, and pitted Eugene against four other computer programmes in simultaneous human-machine conversations over five parallel tests. Across all of these instances, only Eugene was able to convince 33 percent of the human judges that it was a human boy. Built with algorithms that support "conversational logic" and open-ended topics, Eugene opened up a whole new reality of intelligent machines capable of fooling humans.
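The pass criterion itself is nothing more than a fraction: the share of judges a programme convinces, measured against the 30% bar. A minimal sketch of that arithmetic (the verdict counts below are illustrative, chosen to mirror the reported 33 percent, not the actual 2014 event data):

```python
# Hypothetical judge verdicts: True = the judge believed the bot was human.
def passes_turing_test(verdicts, threshold=0.30):
    """Return (success_rate, passed) for a list of per-judge verdicts."""
    rate = sum(verdicts) / len(verdicts)
    return rate, rate > threshold

# 30 judges, 10 of whom were convinced: 33%, just over the 30% bar.
rate, passed = passes_turing_test([True] * 10 + [False] * 20)
print(f"{rate:.0%} convinced, passed: {passed}")  # 33% convinced, passed: True
```

Note that at exactly 30% (9 of 30 judges) the programme would still fail, since the criterion asks for more than the threshold.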
With implications for the field of artificial intelligence, cyber-crime, philosophy and metaphysics, it's humbling to know that Eugene is version 1.0 and that its creators are already working on something more sophisticated and advanced.
Love in the Time of Social A.I.s
So, should humanity just begin wrapping up its affairs, ready to hand ourselves over to our emerging overlords? No. Not necessarily. Despite the interesting outcome of the Turing Test, most scientists in the field of artificial intelligence aren't that impressed. The veracity and validity of the Test itself has long been questioned as we've discovered progressively more about intelligence, consciousness and the trickery of computer programmes. In fact, the internet is already flooded with many of Eugene's unknown kin: a report by Incapsula Research showed that nearly 62 percent of all web traffic is generated by automated computer programs commonly called bots. Some of these bots act as social hacking tools that engage humans in online chats, pretending to be real people (mostly women, oddly enough) and luring them to malicious websites. The fact that we are already fighting a silent war against pop-up chat alerts is perhaps a nascent indication of the war we might have to face: not deadly, but definitely annoying.
A very real threat from these pseudo-artificial-intelligence-powered chatbots was a particular bot called "TextGirlie". This flirtatious and engaging chatbot used advanced social hacking techniques to trick humans into visiting dangerous websites. TextGirlie would proactively scour publicly available social networking data and contact people on their visibly shared mobile numbers. The chatbot would send them messages pretending to be a real girl and invite them to chat in a private online room. The fun, colourful and titillating conversation would quickly lead to invitations to visit webcam sites or dating websites by clicking on links, and that is when the trouble would begin. This scam affected over 15 million people over a period of months before there was any clear awareness amongst users that they had all been fooled by a chatbot.
The delay in awareness was most likely down to the embarrassment of having been conned by a machine, which slowed the reporting of the threat, and it goes to show how easily human beings can be manipulated by seemingly intelligent machines.
Intelligent life on our planet
It's easy to snigger at the misfortune of those who have fallen victim to programs like TextGirlie and wonder whether there is any intelligent life on this planet (or any other), but the smugness is short-lived, since most of us are already silently and unknowingly dependent on predictive and analytical software for many of our daily needs. These programmes are just early evolutionary ancestors of the yet-to-be-realised fully functional artificial intelligence systems, and they have become integral to our way of life. The application of predictive and analytical programmes is prevalent in major industries including food and retail, telecommunications, utility routing, traffic management, financial trading, inventory management, crime detection, weather monitoring and a host of others, at various levels. Because these types of programmes are kept distinguished from artificial intelligence on account of their commercial applications, it's easy to overlook their true nature. But let's not kid ourselves: any analytical program with access to immense databases for the purpose of predicting patterned behaviour is the perfect archetype from which "real" artificial intelligence programs can be, and will be, created.
A substantial case in point occurred within the tech-savvy community of Reddit users in early 2014. In the catacombs of Reddit forums dedicated to "dogecoin", a very popular user by the name of "wise_shibe" created some serious conflict in the community. The forum, normally devoted to discussing the world of dogecoin, was mildly disturbed when "wise_shibe" joined the conversation offering Oriental wisdom in the form of clever remarks. The amusing and engaging dialogue offered by "wise_shibe" garnered him many fans, and given the forum's facilitation of dogecoin payments, many users made token donations to "wise_shibe" in exchange for his/her "wisdom". However, soon after this rising popularity had earned the account an extraordinary cache of digital currency, it was noticed that "wise_shibe" had an odd sense of omniscient timing and a habit of repeating himself. Eventually it was revealed that "wise_shibe" was a bot programmed to draw from a database of proverbs and sayings and to post messages in chat threads on related topics. Reddit was pissed.
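A bot of this kind needs little more than a table of sayings keyed to topics and a trigger that fires when a thread mentions one of them. The real bot's code was never published, so the sketch below is a guess at the general shape of the technique; the keywords and proverbs are invented for illustration:

```python
# A toy "wise_shibe"-style bot: match a thread's words against keyword-tagged
# proverbs and reply with the first one that fits. All proverbs invented here.
PROVERBS = {
    "money":    "A fool and his dogecoin are soon parted.",
    "patience": "The slow shibe still reaches the moon.",
    "risk":     "Never dig for coins in another dog's yard.",
}

def wise_reply(thread_text):
    """Return a matching proverb for the thread, or None if no keyword fits."""
    words = set(thread_text.lower().split())
    for keyword, proverb in PROVERBS.items():
        if keyword in words:
            return proverb
    return None

print(wise_reply("so much patience needed holding doge"))
# -> The slow shibe still reaches the moon.
```

The "omniscient timing" users noticed falls out naturally from such a design: a keyword match fires instantly, and a small fixed table guarantees the repetition that eventually gave the bot away.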
Luke, Join the Dark Side
If machines programmed by humans are capable of learning, growing, imitating and convincing us of their humanity, then who's to argue that they aren't intelligent? The question then arises: what nature will these intelligences take on as they grow within society? Technologists and scientists have already laid much of the groundwork in the form of supercomputers capable of deep thinking. Tackling the problem of intelligence piecemeal has already led to the creation of grandmaster-beating chess machines like Deep Blue and quiz-show champions like Watson. However, when these titans of calculation are put through kindergarten-level intelligence tests, they fail miserably in matters of inference, intuition, instinct, common sense and applied knowledge.
Their ability to learn is still limited by their programming. In contrast to these static computational supercomputers, more organically designed technologies such as the delightful field of insect robotics are more promising. These "brain in a body" computers are designed to interact with their surroundings and learn from experience as any biological organism would. By incorporating the ability to interface with a physical reality, these applied artificial intelligences can develop their own sense of understanding of the world. Similar in design to insects or small animals, these machines are aware of their own physicality and have programming that allows them to relate to their environment in real time, developing a sense of "experience" and the ability to negotiate with reality.
That is a far better testament to intelligence than checkmating a grandmaster. The biggest pool of experiential data that any artificially created intelligent machine can readily access is publicly available social media content. In this regard, Twitter has emerged as an obvious favourite, with millions of distinct individuals and vast volumes of communication for a machine to process and infer from. The Twitter test of intelligence may well be more contemporarily relevant than the Turing Test, in that the very language of communication is not "intelligently" modern: it runs to no more than 140 characters per message. The Twitter world is an ecosystem where individuals communicate in blurbs of thoughts and redactions of reason, the modern form of discourse, and it is here that cutting-edge social bots find their greatest acceptance as humans. These so-called socialbots have been let loose on the Twitterverse by researchers, leading to very intriguing results.
The ease with which these programmed bots can construct a believable personal profile, including aspects like pictures and gender, has fooled Twitter's bot detection systems over 70 percent of the time. The idea that a society so steeped in digital communication, and so trusting of digital messages, can be fooled has lasting repercussions. Just within the Twitterverse, the tactic of using an army of socialbots to create trending topics, biased opinions, fake support and the illusion of unified diversity can prove very dangerous. In large numbers, these socialbots can be used to frame the public discourse on significant topics discussed in the digital realm.
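The trending-topic manipulation comes down to simple counting: a naive popularity metric cannot distinguish bot volume from genuine interest. A toy illustration of the arithmetic (account types, topics and numbers are all invented):

```python
# A naive "trending" metric just counts mentions, so 500 bot posts look
# exactly like 500 genuine ones. Filtering by author type tells another story.
from collections import Counter

posts = (
    [("human", "election")] * 40         # genuine interest
    + [("bot", "miracle-diet")] * 500    # one botnet pushing a topic
    + [("human", "miracle-diet")] * 10
)

naive_trending = Counter(topic for _, topic in posts)
human_trending = Counter(topic for author, topic in posts if author == "human")

print(naive_trending.most_common(1))   # [('miracle-diet', 510)] - bots win
print(human_trending.most_common(1))   # [('election', 40)] - actual opinion
```

The catch, of course, is that the filtered count assumes we can label authors as bots in the first place, which is exactly what the 70 percent figure above says we often cannot.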
This phenomenon is known as "astroturfing", taking its name from the famous fake grass used at sporting events: the illusion of "grass-roots" interest in a topic, created by socialbots, is taken to be a genuine reflection of the opinions of the population. Wars have been started with significantly less stimulus. Consider socialbot-powered SMS messages in India threatening certain communities and you get the idea. But taking things one step further is the 2013 announcement by Facebook of a project that seeks to combine the "deep thinking" and "deep learning" capabilities of computers with Facebook's gigantic storehouse of personal data on over a billion individuals.
In effect, this looks beyond merely "fooling" humans and dives deep into "mimicking" them, in a prophetic sort of way: a program that might potentially even "understand" humans. The program being developed by Facebook is humorously called DeepFace, and it is currently being touted for its revolutionary facial recognition technology. But its broader goal is to survey existing user accounts on the network in order to predict users' future activity.
By incorporating pattern recognition, account analysis, location services and other personal variables, DeepFace is meant to identify and assess the emotional, psychological and physical states of users. By bridging the gap between quantified data and its personal implications, DeepFace might be considered a machine capable of empathy. But for now it will probably just be used to spam users with more targeted ads.
From Syntax to Sentience
Artificial intelligence in all its current forms is primitive at best: just a tool that can be controlled, directed and modified to do the bidding of its human controller. This inherent servitude is the exact opposite of the nature of intelligence, which in normal circumstances is curious, exploratory and downright contrarian. The synthetic AI of the early 21st century will forever be associated with this paradox, and the term "artificial intelligence" will be nothing more than an oxymoron that we used to hide our own ineptitude. The future of artificial intelligence cannot be realised as a product of our technological need, nor as the result of creation by us as a benevolent species.
We as humans struggle to comprehend the reasons behind our own sentience, usually turning to the metaphysical for answers, so we cannot really expect sentience to be created at the hands of humanity. Computers of the future are sure to be exponentially faster than those of today, and it is reasonable to assume that the algorithms that determine their behaviour will also advance to unpredictable heights; but what cannot be known is when, if ever, artificial intelligence will attain sentience.