AI Camp 2013

23-25th August 2013

AI in Education

10 Roles of AI in Education

AI in Technology

SIRI: AI Features in iPhone

AI in Film

Top 10 Movies Implementing AI

AI in Robotics

Kismet, a robot at M.I.T.'s Artificial Intelligence Lab, recognizes human body language and voice inflection and responds appropriately

AI in Games

Cleverbot And Turing Test

Monday, 19 August 2013

Top 10 Movies Implementing AI



Hollywood tends to associate artificial intelligence with dystopian futures: machines taking over the planet and, in general, becoming so deeply ingrained in our lives that we are unable to realize our inevitable demise until it is much too late. Movies play upon our fears in order to evoke an emotional response. Amongst the most prevalent themes is that, through the advancement of artificial intelligence, machines will inevitably become self-aware and begin thinking for themselves. Because of the global interconnectedness of modern technology, the fear is that if one machine starts to think for itself it could control all the computers in the world. Such is the premise of movies like The Terminator, I, Robot, and The Matrix.

Hollywood also plays on the fact that making robots similar to humans threatens our individuality and uniqueness. And it’s just kind of creepy. Movies like AI: Artificial Intelligence depict worlds where humans have rebelled against the machines that have been created, in essence, to replace them. Now before you dive into this list, know that there can only be ten selections; I will likely leave one of your favorite movies off, and you will be outraged and spray vitriol and hate all over the comments section. People may also have varying opinions on how they define “artificial intelligence”, and may hate my definition. This list is completely subjective and is in no way an exact science. So save it, and try to enjoy the list sans nitpicking.


10. I, Robot (2004)





Sure, it may not have been the best film, but it was plenty entertaining, without a doubt. This one plays on the Terminator/Skynet concept of robots becoming self-aware, ultimately thinking for themselves, and subsequently trying to protect humans from themselves. The thought is that the human race, at our current rate of self-destruction through war and overconsumption of natural resources, must be saved from itself before it goes extinct. The AI machines interpret their directive of “protecting humans” as also “protecting humans from themselves.” The robots try to take over the world by force, with the rationale that the ends justify the means. All along the way the robot Sonny, an initially suspect character, shows his true colors (and personality) as he helps the humans escape the clutches of the machines.

The best part of the movie: When Sonny lays the smack down on the bad robots as he attempts to retrieve the vial of Nanites. The entire movie he comes across as an intellectual, deep-thinker who has “dreams” like humans, until the finale of the movie when he shows some serious Ultimate Fighter skills on his overmatched counterparts.


9. RoboCop (1987)





RoboCop, oh yeah. Talk about the ultimate cult following. Even by today’s standards, the scene where Peter Weller gets blown to chunks is still nauseating. And who could forget the stop-motion ED-209 that growls and later cries when it falls down the stairs? Talk about artificial intelligence gone awry: during a mock arrest, the ED-209 couldn’t identify the mock perpetrator as being unarmed, and shot him to pieces. Peter Weller, on the other hand, gets turned into a cyborg who has to will his mind into overriding his programmed directives in order to hold onto the remaining piece of himself that makes him human. Or kind of human.


The best part of the movie: I’m not sure I would call it “the best part” of RoboCop, but certainly the most memorable part is the aforementioned destruction of Peter Weller. The “best scene” is probably the great shootout in the warehouse. “Come quietly, or there will be… trouble.” All things considered, this action sequence was tightly edited and delightfully entertaining.


8. Alien Series (1979 – Present)





These movies are still going, but they shouldn’t be. Without a shred of doubt, Alien and Aliens (the first two films) were the cream of the crop. The original Alien was produced for $11 million and made a cool $105 million; quite an impressive feat for over thirty years ago. It was a remarkable leap forward in terms of visual effects, arguably on par with other sci-fi blockbusters of its time: Star Wars, Close Encounters of the Third Kind, and Jaws. The artificial intelligence aspect was not central to the movie’s plot, and as a matter of fact, came as a surprise to everyone! No one realized that Ash was a robot until he (it?) was decapitated, shooting milky fluid and other “bot parts” all over the place. That’s what you get for trying to shove an adult magazine down Ripley’s throat, I guess.



The best part of the movie: The part with Ash getting his head ripped off was pretty good, but I think this one is a no-brainer. The dinner scene. Everyone remembers the dinner scene. As a matter of fact, Ridley Scott did not tell the other actors what was going to happen during the scene with the intention that they would show true emotional responses. I still get goose bumps when I think of Kane’s chest exploding with a baby alien popping out.


7. Star Wars (1977-present)



I would be crucified if I left Star Wars off the list. I am personally not a fan of the movies, but can surely appreciate the significance of their contribution to the world of entertainment. About as recognizable as the Coca-Cola symbol, C-3PO and R2-D2 headline the cast of artificial intelligence characters. Some may argue for other characters from the more recent films, but frankly, they just aren’t nearly as well done as the first three. C-3PO portrays a highly emotional and borderline neurotic but lovable humanoid-shaped bot, while R2-D2 is the ultimate Swiss Army knife, seemingly having a solution for all types of technology-related dilemmas.


The best part of the movie(s): Honestly, where do you even start? It’s really unfair to pose this kind of question for such an epic series spanning nearly four decades. For the sake of simplicity, I will funnel it down to my favorite movie with R2-D2: Star Wars Episode VI: Return of the Jedi. He gets pretty torn up in the battles, but ultimately plays an integral role in defeating the Imperial troops.


6. The Matrix trilogy (1999-2003)





This collection of three movies has an insane cult following, grossing over $1.6 billion to prove it, to the point where people get physically upset if you so much as question the quality of Keanu Reeves’ acting. The AI factor in the Matrix trilogy is much darker and more intense than many of the entries on this list. The movie plays on the well-known theory of the eventuality of machines becoming self-aware, and takes it one step further. The machines don’t just want to take over the world; they want to be able to sustain their existence. Cue human beings. The machines plug humans into a simulated reality and use their bodies as fuel.



The best part of the movie(s): I’m partial to the first Matrix, and in particular the ending scene when Neo begins to scratch the surface of the scope of his powers. In an act of sheer necessity, he stops a wall of bullets shot by Agent Smith. The look on Smith’s face when he knows…KNOWS he has been outmatched by a human is an all-time great scene in AI movie history.

5. AI: Artificial Intelligence (2001)





This movie produced an overwhelmingly polarized response. Those that loved it truly appreciated the pace and subtlety, and identified with the raw emotion of longing to be accepted and loved. Those that hated it made no bones about it; some I have spoken to went so far as to say “Steven Spielberg owes me two hours and twenty minutes of my life back.” Love it or hate it, this movie focuses exclusively on artificial intelligence, and makes a prediction as to how humans will respond to robots becoming ingrained in our lives. The newest type of bot, David, is the first of his kind created with the ability to feel and love. The only problem is that his creators built him for selfish reasons, and didn’t take into account the fact that he would never grow old mentally or physically. He would be cursed with immortality, and inevitably watch the people he loves die. He becomes a victim of prejudice and rejection, all the while simply longing for the love of his “mother.”


The best part of the movie: David spends the entire movie, which spans thousands of years (he was frozen during an ice age), searching for the Blue Fairy from Pinocchio, who he thinks will turn him into a real boy. He believes that if he becomes human, his mother will love him. At the very end, the highly advanced machines of the future are able to bring David’s mother back for one day in a recreated world from David’s memory. “That was the everlasting moment he had been waiting for. And the moment had passed, for Monica was sound asleep. More than merely asleep. Should he shake her she would never rouse. So David went to sleep too. And for the first time in his life, he went to that place where dreams are born.”


4. Wall-E (2008)


How could anyone not love the Pixar story of a tiny trash-compacting robot that seems to develop… a personality? Wall-E indeed seems to cultivate a unique personality, with an appreciation of classic movies and an everlasting desire to hold a female’s hand. Who didn’t melt into a little puddle when he practiced by holding his own hand? Throughout the movie, Wall-E craves the one thing humans have neglected and grossly take for granted: face-to-face communication.

The best part of the movie: Undoubtedly the most emotional part of the movie is when Wall-E has seemingly died as a result of sacrificing himself to save the sole piece of evidence that Earth is habitable: a tiny plant. My favorite part of Wall-E, however, is how it draws a contrast between a little robot that seemingly acts more like a human than the actual humans, who have indeed become so reliant on machines that they have essentially become machines themselves. They have become so reliant, in fact, that their bodies have ballooned into gelatinous masses which sit in chairs all day and communicate indirectly via technology.


3. 2001: A Space Odyssey (1968)
2001 is the oldest film to make the list and is also one of the most visually and thematically innovative. It was well beyond its time. The eerie red eye of HAL 9000 is one of the most enduring symbols of artificial intelligence in film today. HAL demonstrates desperate survival tactics, and forces the viewer to ask the question: “Is HAL experiencing human emotions?” HAL and the concept of artificial intelligence were woven into a deeper plot which has been interpreted dozens of ways. One of the most widely accepted explanations is that the Monolith shows us how insignificant humans are in the grand scheme of the universe.

The best part of the movie: The most disturbingly memorable scene is the death of HAL. Dr. Dave Bowman works his way to the chamber containing HAL’s memory modules so that he can disconnect him, while all along the way HAL tries to talk Dave out of his plan. The process of disconnecting HAL takes several minutes, and all the while HAL says, “My mind is going… I can feel it.” HAL sings “Daisy” as he dies, in an eerily human fashion.

2. Blade Runner (1982)


Few films on this list develop their artificially intelligent characters with the depth that Blade Runner does. Rutger Hauer and Daryl Hannah headline the “AI” cast as genetically engineered organic robots, specifically created to serve a purpose, exclusively for the benefit of humans. Unfortunately the “Replicants” were designed to live a very short period of time, likely as a safety and control measure for humans. The protagonist, played by Harrison Ford, is a ruthless bounty hunter of these “Skin Jobs”, whom he terminates rather clinically. It isn’t until the finale of the movie that Ford (and the audience) truly understands the plight of Hauer and Hannah. They simply wanted more time. Precious seconds of time that humans took for granted, and that the Replicants killed for.

The best part of the movie:

For most, this is a no-brainer. Harrison Ford is clinging to his life, literally by his fingers on the ledge of a tall building, when Rutger Hauer (who had previously been trying to kill him), pulls him to safety at the last minute. It isn’t the act of actually saving Ford that sets the scene apart, but rather the knowing, the sheer deep longing for more life tattooed all across Hauer’s face, that completely flips your perspective of his character. “I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I’ve watched C-beams glitter in the dark near the Tannhauser Gate. All those moments will be lost in time, like tears in rain. Time to die.”


1. Terminators 1 and 2 (1984 & 1991)


Out of respect for James Cameron, I won’t even include the abominations that came after. With T1 and T2, Cameron cast two of the scariest bad guys in the history of sci-fi films: the CSM-101 played by Schwarzenegger, and the T-1000 played by Robert Patrick. The most frightening aspect of these characters is their (believably) relentless determination, absence of empathy, and indifference toward humankind. Cameron’s antagonists put a magnifying glass to our cultural trend towards emotionless digitization of personal interaction. The last lines narrated by Sarah Connor, “If a machine, a Terminator, can learn the value of human life, maybe we can too”, drive home the underlying theme of the film.

The best part of the movie: The finales of both movies were amongst the most thrilling and frightening action scenes of all time. At the end of the original Terminator, Kyle Reese just can’t seem to put Arnold down. After blowing him up in a gas truck, the Terminator’s skeletal remains follow Reese and Sarah into a machine fabrication shop. Reese blows up the Terminator again, this time with a pipe bomb. Right when you think he is down for the count, he crawls after Sarah until finally being crushed in a giant press.

This list cannot be complete without a shout-out to the final couple of scenes of Terminator 2: an epic helicopter, big rig, and truck chase that reaches its finale at a steel plant. Arnold climbs aboard the big rig driven by the T-1000 and wrecks it. The T-1000 gets drenched in liquid nitrogen, freezing him completely. Arnold aims the gun at him, says “Hasta la vista, baby”, pulls the trigger, and blows him into thousands of metal pieces. In an all-time classic scary moment, the T-1000’s pieces liquefy from the heat of the plant and reassemble. The film ends with Arnold being lowered into the hot steel, after telling John he “understands now why he cries, but it is something he could never do.”

source:
http://www.wittybadger.com/top-movies-featuring-artificial-intelligence/2/

Cleverbot


Cleverbot is a web application that uses an artificial intelligence algorithm to converse with humans. It was created by the British AI scientist Rollo Carpenter, who also created Jabberwacky, a similar web application. It is unique in the sense that it learns from humans, remembering words within its AI. In its first decade Cleverbot held several thousand conversations with Carpenter and his associates. Since launching on the web in 1997, the number of conversations has exceeded 65 million.

How It Works

Unlike other chatterbots, Cleverbot's responses are not programmed. Instead, it "learns" from human input: humans type into the box below the Cleverbot logo, and the system finds all keywords or an exact phrase matching the input. After searching through its saved conversations, it responds to the input by finding how a human responded to that input when it was asked, in part or in full, by Cleverbot.[3][4] Although the commercial version of Cleverbot supports more than one thousand requests per server, the web-hosted service handles only one or two people per server, which allows for greater speed and higher-quality responses from the artificial intelligence system.
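
The description above amounts to a retrieval approach: remember past exchanges and reply with whatever a human once said to the most similar prompt. Below is a minimal, hypothetical sketch of that idea; it is not Cleverbot's actual code, and every name in it is invented for illustration.

```python
def tokenize(text):
    """Lowercase a sentence and split it into simple keyword tokens."""
    return set(text.lower().split())

class RetrievalBot:
    def __init__(self):
        # Saved conversations: list of (human_prompt, human_reply) pairs.
        self.memory = []

    def learn(self, prompt, reply):
        """Remember how a human responded to a given prompt."""
        self.memory.append((prompt, reply))

    def respond(self, user_input):
        """Return the stored human reply whose prompt shares the most keywords."""
        words = tokenize(user_input)
        best_reply, best_overlap = "Tell me more.", 0
        for prompt, reply in self.memory:
            overlap = len(words & tokenize(prompt))
            if overlap > best_overlap:
                best_reply, best_overlap = reply, overlap
        return best_reply

bot = RetrievalBot()
bot.learn("how are you today", "I'm fine, thanks for asking.")
bot.learn("do you like movies", "Yes, especially science fiction.")
print(bot.respond("how are you"))   # -> "I'm fine, thanks for asking."
```

Real systems of this kind score matches far more carefully, but the learn-by-storing, answer-by-lookup loop is the core idea the paragraph above describes.
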
Cleverbot participated in a formal Turing test at the Techniche festival at the Indian Institute of Technology Guwahati on September 3, 2011. Out of the 334 votes cast, Cleverbot was judged to be 59.3% human, compared to the rating of 63.3% human achieved by the human participants. A score of 50.05% or higher is often considered a passing grade.[5]




AI in Medicine

Robot Telepresence
da Vinci Surgical System - © 1998 Intuitive Surgical, Inc.

Artificial intelligence in medicine is a new research area that combines sophisticated representational and computing techniques with the insights of expert physicians to produce tools for improving health care.

Artificial Intelligence is the study of ideas which enable computers to do the things that make people seem intelligent ... The central goals of Artificial Intelligence are to make computers more useful and to understand the principles which make intelligence possible.

Medicine is a field in which technology is much needed. Our increasing expectations of the highest quality health care and the rapid growth of ever more detailed medical knowledge leave the physician without adequate time to devote to each case and struggling to keep up with the newest developments in his field. Due to lack of time, most medical decisions must be based on rapid judgments of the case relying on the physician's unaided memory. Only in rare situations can a literature search or other extended investigation be undertaken to assure the doctor (and the patient) that the latest knowledge is brought to bear on any particular case.

We view computers as an intellectual, deductive instrument which can be integrated into the structure of the medical care system. The idea that these machines can replace the many traditional activities of the physician is probably mistaken. Advocates of artificial intelligence research envision that physicians and the computer will engage in frequent dialogue, the computer continuously taking note of history, physical findings, laboratory data, and the like, alerting the physician to the most probable diagnoses and suggesting the appropriate, safest course of action.

Expert or knowledge-based systems are the commonest type of AIM system in routine clinical use. They contain medical knowledge, usually about a very specifically defined task, and are able to reason with data from individual patients to come up with reasoned conclusions. Although there are many variations, the knowledge within an expert system is typically represented in the form of a set of rules.
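
To make the idea of rule-based knowledge concrete, here is a minimal sketch of a toy rule engine. The patient fields, rules, and thresholds are invented purely for illustration; they are not medical advice and do not come from any real clinical system.

```python
# Toy rule-based "expert system": a knowledge base of condition -> conclusion
# rules is applied to one patient's data. All values below are fabricated.

patient = {"temperature_c": 39.2, "cough": True, "chest_pain": False}

# Knowledge base: each rule pairs a condition over the patient data with a conclusion.
rules = [
    (lambda p: p["temperature_c"] >= 38.0,                "fever present"),
    (lambda p: p["cough"] and p["temperature_c"] >= 38.0, "consider respiratory infection"),
    (lambda p: p["chest_pain"],                           "flag for urgent review"),
]

def infer(patient_data):
    """Fire every rule whose condition holds and collect the conclusions."""
    return [conclusion for condition, conclusion in rules if condition(patient_data)]

print(infer(patient))  # -> ['fever present', 'consider respiratory infection']
```
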

Medicine has formed a rich test-bed for machine learning experiments in the past, allowing scientists to develop complex and powerful learning systems. While there has been much practical use of expert systems in routine clinical settings, at present machine learning systems still seem to be used in a more experimental way. There are, however, many situations in which they can make a significant contribution.


PROS

  • Machine learning systems can be used to develop the knowledge bases used by expert systems. Given a set of clinical cases that act as examples, a machine learning system can produce a systematic description of those clinical features that uniquely characterise the clinical conditions. This knowledge can be expressed in the form of simple rules, or often as a decision tree (a toy sketch of this appears after this list).

  • The decisions and recommendations of a program can be explained to its users and evaluators in terms which are familiar to the experts.

  • We can measure the extent to which our goal is achieved by a direct comparison of the program's behavior to that of the experts.

  • The ability to develop expert computer programs for clinical use makes possible the inexpensive dissemination of the best medical expertise to geographical regions where that expertise is lacking, and makes consultation help available to non-specialists who are not within easy reach of expert human consultants.

  • The ability to formalize medical expertise enables physicians to understand better what they know and gives them a systematic structure for teaching their expertise to medical students.

  • The ability to test artificial intelligence theories in "real world" situations.

  • The resulting developments in the AI sub-field of machine learning have produced a set of techniques which have the potential to alter the way in which knowledge is created.

  • AI looks at raw data and then attempts to hypothesize relationships within the data, and newer learning systems are able to produce quite complex characterizations of those relationships. In other words, they attempt to discover humanly understandable concepts.

  • AI enables the discovery of new drugs. The learning system is given examples of one or more drugs that weakly exhibit a particular activity, and based upon a description of the chemical structure of those compounds, the learning system suggests which of the chemical attributes are necessary for that pharmacological activity. Based upon the new characterization of chemical structure produced by the learning system, drug designers can try to design a new compound that has those characteristics.
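
As a rough illustration of the first point above (learning a decision tree from example cases), the sketch below uses scikit-learn on fabricated toy data; the features, case values, and labels are invented and are not clinical data.

```python
# Toy sketch: learn a decision tree from labelled example "cases", then read
# the learned knowledge back as human-interpretable rules.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each example case: [fever (0/1), cough (0/1), rash (0/1)]  (fabricated data)
cases  = [[1, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 0], [1, 1, 1], [0, 0, 0]]
labels = ["flu",     "flu",     "measles", "cold",    "measles", "healthy"]

tree = DecisionTreeClassifier(max_depth=3).fit(cases, labels)

# The tree can be printed as if/else rules, i.e. a machine-built knowledge base.
print(export_text(tree, feature_names=["fever", "cough", "rash"]))
print(tree.predict([[1, 1, 0]]))  # -> ['flu']
```
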

CONS

  • Some systems require the existence of an electronic medical record system to supply their data, and most institutions and practices do not yet have all their working data available electronically.

  • Others suffer from poor human interface design and so do not get used even if they are of benefit.

  • Much of the reluctance to use systems simply arose because expert systems did not fit naturally into the process of care, and as a result using them required additional effort from already busy individuals.

  • Computer illiteracy among healthcare workers is also a problem for artificial intelligence systems. If a system is perceived as beneficial to those using it, then it will be used. If not, it will probably be rejected.

APPLICATIONS
There are many different types of clinical task to which expert systems can be applied.

  1. Generating alerts and reminders. In so-called real-time situations, an expert system attached to a monitor can warn of changes in a patient's condition. In less acute circumstances, it might scan laboratory test results or drug orders and send reminders or warnings through an e-mail system (a toy sketch of this appears after this list).
  2. Diagnostic assistance. When a patient's case is complex, rare or the person making the diagnosis is simply inexperienced, an expert system can help come up with likely diagnoses based on patient data.
  3. Therapy critiquing and planning. Systems can either look for inconsistencies, errors and omissions in an existing treatment plan, or can be used to formulate a treatment based upon a patient's specific condition and accepted treatment guidelines.
  4. Agents for information retrieval. Software 'agents' can be sent to search for and retrieve information, for example on the Internet, that is considered relevant to a particular problem. The agent contains knowledge about its user's preferences and needs, and may also need to have medical knowledge to be able to assess the importance and utility of what it finds.
  5. Image recognition and interpretation. Many medical images can now be automatically interpreted, from plain X-rays through to more complex images like angiograms, CT and MRI scans. This is of value in mass screenings, for example, when the system can flag potentially abnormal images for detailed human attention.
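
As a toy illustration of the first application (alerts and reminders), the sketch below scans a batch of hypothetical lab results against made-up reference ranges and flags out-of-range values; the ranges, tests, and patients are invented for illustration only.

```python
# Toy alert generator: flag any lab result outside its reference range.
reference_ranges = {"potassium_mmol_l": (3.5, 5.0), "glucose_mmol_l": (4.0, 7.8)}

lab_results = [
    {"patient": "A", "test": "potassium_mmol_l", "value": 6.1},
    {"patient": "B", "test": "glucose_mmol_l",   "value": 5.2},
]

def generate_alerts(results):
    """Return an alert message for every result outside its reference range."""
    alerts = []
    for r in results:
        low, high = reference_ranges[r["test"]]
        if not low <= r["value"] <= high:
            alerts.append(f"ALERT: patient {r['patient']} {r['test']} = {r['value']}")
    return alerts

for alert in generate_alerts(lab_results):
    print(alert)   # -> ALERT: patient A potassium_mmol_l = 6.1
```
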

10 Roles of AI in Education


For decades, science fiction authors, futurists, and movie makers alike have been predicting the amazing (and sometimes catastrophic) changes that will arise with the advent of widespread artificial intelligence. So far, AI hasn’t made any such crazy waves, and in many ways has quietly become ubiquitous in numerous aspects of our daily lives. From the intelligent sensors that help us take perfect pictures, to the automatic parking features in cars, to the sometimes frustrating personal assistants in smartphones, artificial intelligence of one kind or another is all around us, all the time.


While we’ve yet to create self-aware robots like those that pepper popular movies like 2001: A Space Odyssey and Star Wars, we have made smart and often significant use of AI technology in a wide range of applications that, while not as mind-blowing as androids, still change our day-to-day lives. One place where artificial intelligence is poised to make big changes (and in some cases already is) is in education. While we may not see humanoid robots acting as teachers within the next decade, there are many projects already in the works that use computer intelligence to help students and teachers get more out of the educational experience. Here are just a few of the ways those tools, and those that will follow them, will shape and define the educational experience of the future.
  1. Artificial intelligence can automate basic activities in education, like grading.

    In college, grading homework and tests for large lecture courses can be tedious work, even when TAs split it between them. Even in lower grades, teachers often find that grading takes up a significant amount of time, time that could be used to interact with students, prepare for class, or work on professional development. While AI may not ever be able to truly replace human grading, it’s getting pretty close. It’s now possible for teachers to automate grading for nearly all kinds of multiple-choice and fill-in-the-blank testing, and automated grading of student writing may not be far behind (a toy sketch of multiple-choice scoring appears after this list). Today, essay-grading software is still in its infancy and not quite up to par, yet it can (and will) improve over the coming years, allowing teachers to focus more on in-class activities and student interaction than grading.
  2. Educational software can be adapted to student needs.

    From kindergarten to graduate school, one of the key ways artificial intelligence will impact education is through the application of greater levels of individualized learning. Some of this is already happening through growing numbers of adaptive learning programs, games, and software. These systems respond to the needs of the student, putting greater emphasis on certain topics, repeating things that students haven’t mastered, and generally helping students to work at their own pace, whatever that may be. This kind of custom tailored education could be a machine-assisted solution to helping students at different levels work together in one classroom, with teachers facilitating the learning and offering help and support when needed. Adaptive learning has already had a huge impact on education across the nation (especially through programs like Khan Academy), and as AI advances in the coming decades adaptive programs like these will likely only improve and expand.
  3. It can point out places where courses need to improve.

    Teachers may not always be aware of gaps in their lectures and educational materials that can leave students confused about certain concepts. Artificial intelligence offers a way to solve that problem. Coursera, a massive open online course provider, is already putting this into practice. When a large number of students are found to submit the wrong answer to a homework assignment, the system alerts the teacher and gives future students a customized message that offers hints to the correct answer. This type of system helps to fill in the gaps in explanation that can occur in courses, and helps to ensure that all students are building the same conceptual foundation. Rather than waiting to hear back from the professor, students get immediate feedback that helps them to understand a concept and remember how to do it correctly the next time around.
  4. Students could get additional support from AI tutors.

    While there are obviously things that human tutors can offer that machines can’t, at least not yet, the future could see more students being tutored by tutors that only exist in zeros and ones. Some tutoring programs based on artificial intelligence already exist and can help students through basic mathematics, writing, and other subjects. These programs can teach students fundamentals, but so far aren’t ideal for helping students learn high-order thinking and creativity, something that real-world teachers are still required to facilitate. Yet that shouldn’t rule out the possibility of AI tutors being able to do these things in the future. With the rapid pace of technological advancement that has marked the past few decades, advanced tutoring systems may not be a pipe dream.
  5. AI-driven programs can give students and educators helpful feedback.

    AI can not only help teachers and students to craft courses that are customized to their needs, but it can also provide feedback to both about the success of the course as a whole. Some schools, especially those with online offerings, are using AI systems to monitor student progress and to alert professors when there might be an issue with student performance. These kinds of AI systems allow students to get the support they need and for professors to find areas where they can improve instruction for students who may struggle with the subject matter. AI programs at these schools aren’t just offering advice on individual courses, however. Some are working to develop systems that can help students to choose majors based on areas where they succeed and struggle. While students don’t have to take the advice, it could mark a brave new world of college major selection for future students.
  6. It is altering how we find and interact with information.

    We rarely even notice the AI systems that affect the information we see and find on a daily basis. Google adapts results to users based on location, Amazon makes recommendations based on previous purchases, Siri adapts to your needs and commands, and nearly all web ads are geared toward your interests and shopping preferences. These kinds of intelligent systems play a big role in how we interact with information in our personal and professional lives, and could just change how we find and use information in schools and academia as well. Over the past few decades, AI-based systems have already radically changed how we interact with information and with newer, more integrated technology, students in the future may have vastly different experiences doing research and looking up facts than the students of today.
  7. It could change the role of teachers.

    There will always be a role for teachers in education, but what that role is and what it entails may change due to new technology in the form of intelligent computing systems. As we’ve already discussed, AI can take over tasks like grading, can help students improve learning, and may even be a substitute for real-world tutoring. Yet AI could be adapted to many other aspects of teaching as well. AI systems could be programmed to provide expertise, serving as a place for students to ask questions and find information, or could even potentially take the place of teachers for very basic course materials. In most cases, however, AI will shift the role of the teacher to that of facilitator. Teachers will supplement AI lessons, assist students who are struggling, and provide human interaction and hands-on experiences for students. In many ways, technology is already driving some of these changes in the classroom, especially in schools that are online or embrace the flipped classroom model.
  8. AI can make trial-and-error learning less intimidating.

    Trial and error is a critical part of learning, but for many students, the idea of failing, or even not knowing the answer, is paralyzing. Some simply don’t like being put on the spot in front of their peers or authority figures like a teacher. An intelligent computer system, designed to help students to learn, is a much less daunting way to deal with trial and error. Artificial intelligence could offer students a way to experiment and learn in a relatively judgment-free environment, especially when AI tutors can offer solutions for improvement. In fact, AI is the perfect format for supporting this kind of learning, as AI systems themselves often learn by a trial-and-error method.
  9. Data powered by AI can change how schools find, teach, and support students.

    Smart data gathering, powered by intelligent computer systems, is already making changes to how colleges interact with prospective and current students. From recruiting to helping students choose the best courses, intelligent computer systems are helping make every part of the college experience more closely tailored to student needs and goals. Data mining systems are already playing an integral role in today’s higher-ed landscape, but artificial intelligence could further alter higher education. Initiatives are already underway at some schools to offer students AI-guided training that can ease the transition between college and high school. Who knows but that the college selection process may end up a lot like Amazon or Netflix, with a system that recommends the best schools and programs for student interests.
  10. AI may change where students learn, who teaches them, and how they acquire basic skills.

    While major changes may still be a few decades in the future, the reality is that artificial intelligence has the potential to radically change just about everything we take for granted about education. Using AI systems, software, and support, students can learn from anywhere in the world at any time, and with these kinds of programs taking the place of certain types of classroom instruction, AI may just replace teachers in some instances (for better or worse). Educational programs powered by AI are already helping students to learn basic skills, but as these programs grow and as developers learn more, they will likely offer students a much wider range of services. The result? Education could look a whole lot different a few decades from now.
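
As a toy illustration of the automated grading mentioned in point 1 above, the sketch below scores a multiple-choice quiz against an answer key; the questions, key, and responses are invented for illustration.

```python
# Toy multiple-choice auto-grader: compare responses to an answer key and
# report the score plus the questions to review.
answer_key = {"q1": "B", "q2": "D", "q3": "A"}

def grade(responses):
    """Return (score, total, list of wrongly answered question ids)."""
    wrong = [q for q, correct in answer_key.items() if responses.get(q) != correct]
    return len(answer_key) - len(wrong), len(answer_key), wrong

score, total, wrong = grade({"q1": "B", "q2": "C", "q3": "A"})
print(f"{score}/{total}, review: {wrong}")   # -> 2/3, review: ['q2']
```
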

SIRI: AI Features in iPhone

Siri. Your wish is its command.

Siri lets you use your voice to send messages, schedule meetings, place phone calls, and more.* Ask Siri to do things just by talking the way you talk. Siri is so easy to use and does so much, you’ll keep finding more and more ways to use it.

It understands what you say. And knows what you mean.

Talk to Siri as you would to a person. Say something like “Tell my wife I’m running late” or “Remind me to call the vet.” Siri not only understands what you say, it’s smart enough to know what you mean. So when you ask “Any good burger joints around here?” Siri will reply “I found a number of burger restaurants near you.” Then you can say “Hmm. How about tacos?” Siri remembers that you just asked about restaurants, so it will look for Mexican restaurants in the neighborhood. And Siri is proactive, so it will question you until it finds what you’re looking for.
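
Apple does not publish how Siri works internally, so the sketch below is only a toy illustration of the context carry-over described above: a follow-up request that names just a cuisine reuses the "find restaurants" intent remembered from the previous request. All names and rules here are invented.

```python
# Toy context-aware assistant: remember the last intent so that a short
# follow-up ("How about tacos?") is interpreted in that context.
class Assistant:
    def __init__(self):
        self.last_intent = None

    def handle(self, utterance):
        text = utterance.lower()
        if "burger" in text or "restaurant" in text:
            self.last_intent = "find_restaurants"
            return "I found a number of burger restaurants near you."
        if "tacos" in text and self.last_intent == "find_restaurants":
            # Follow-up: keep the previous intent, swap the cuisine.
            return "OK, here are some Mexican restaurants nearby."
        return "Sorry, I didn't catch that."

assistant = Assistant()
print(assistant.handle("Any good burger joints around here?"))
print(assistant.handle("Hmm. How about tacos?"))
```
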

It helps you do the things you do every day.

Siri makes everyday tasks less tasking. It figures out which apps to use for which requests, and it finds answers to queries through sources like Yelp and WolframAlpha. It plays the songs you want to hear, gives you directions, wakes you up, even tells you the score of last night’s game. All you have to do is ask.

Eyes free.

Apple is working with car manufacturers to integrate Siri into select voice control systems. Through the voice command button on your steering wheel, you’ll be able to ask Siri questions without taking your eyes off the road. To minimize distractions even more, your iOS device’s screen won’t light up. With the Eyes Free feature, ask Siri to call people, select and play music, hear and compose text messages, use Maps and get directions, read your notifications, find calendar information, add reminders, and more. It’s just another way Siri helps you get things done, even when you’re behind the wheel.

source : 
http://www.apple.com/ios/siri/

AI Features in Samsung SIII

The Samsung Galaxy S3 is an extremely powerful device. It supports some interesting artificial intelligence features.


AI in Robotics


Nao

Artificial intelligence (AI) is arguably the most exciting field in robotics. It's certainly the most controversial: Everybody agrees that a robot can work in an assembly line, but there's no consensus on whether a robot can ever be intelligent.
Like the term "robot" itself, artificial intelligence is hard to define. Ultimate AI would be a recreation of the human thought process -- a man-made machine with our intellectual abilities. This would include the ability to learn just about anything, the ability to reason, the ability to use language and the ability to formulate original ideas. Roboticists are nowhere near achieving this level of artificial intelligence, but they have made a lot of progress with more limited AI. Today's AI machines can replicate some specific elements of intellectual ability.
Computers can already solve problems in limited realms. The basic idea of AI problem-solving is very simple, though its execution is complicated. First, the AI robot or computer gathers facts about a situation through sensors or human input. The computer compares this information to stored data and decides what the information signifies. The computer runs through various possible actions and predicts which action will be most successful based on the collected information. Of course, the computer can only solve problems it's programmed to solve -- it doesn't have any generalized analytical ability. Chess computers are one example of this sort of machine.
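
A schematic sketch of that gather-compare-predict loop follows; the sensor reading, actions, and success scores are invented purely for illustration.

```python
# Toy AI problem-solving loop: gather facts, score each possible action
# against stored knowledge, and pick the action with the best predicted outcome.
def sense():
    """Gather facts about the situation (here, a fixed toy reading)."""
    return {"obstacle_distance_m": 0.4}

def predict_success(action, facts):
    """Stored knowledge: predicted success of each action given the facts."""
    if facts["obstacle_distance_m"] < 0.5:
        return {"forward": 0.1, "turn_left": 0.8, "turn_right": 0.7}[action]
    return {"forward": 0.9, "turn_left": 0.4, "turn_right": 0.4}[action]

def choose_action(facts):
    actions = ["forward", "turn_left", "turn_right"]
    return max(actions, key=lambda a: predict_success(a, facts))

print(choose_action(sense()))   # -> turn_left
```
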
Some modern robots also have the ability to learn in a limited capacity. Learning robots recognize if a certain action (moving its legs in a certain way, for instance) achieved a desired result (navigating an obstacle). The robot stores this information and attempts the successful action the next time it encounters the same situation. Again, modern computers can only do this in very limited situations. They can't absorb any sort of information like a human can. Some robots can learn by mimicking human actions. In Japan, roboticists have taught a robot to dance by demonstrating the moves themselves.
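
And a toy sketch of that limited, trial-and-error learning: remember which action produced the desired result in a situation and reuse it the next time. The situations, actions, and "world" below are invented for illustration.

```python
# Toy learning loop: try random actions, store the one that works, reuse it.
import random

memory = {}  # situation -> action that achieved the desired result

def try_action(situation, action):
    """Stand-in for the world: only 'step_high' clears the obstacle."""
    return situation == "obstacle" and action == "step_high"

def act(situation):
    if situation in memory:                 # reuse a remembered success
        return memory[situation]
    action = random.choice(["step_low", "step_high", "turn"])
    if try_action(situation, action):       # store it only if it worked
        memory[situation] = action
    return action

for _ in range(20):
    act("obstacle")
print(memory.get("obstacle"))  # usually -> 'step_high' once it has been discovered
```
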
Some robots can interact socially. Kismet, a robot at M.I.T.'s Artificial Intelligence Lab, recognizes human body language and voice inflection and responds appropriately. Kismet's creators are interested in how humans and babies interact, based only on tone of speech and visual cues. This low-level interaction could be the foundation of a human-like learning system.
Kismet and other humanoid robots at the M.I.T. AI Lab operate using an unconventional control structure. Instead of directing every action using a central computer, the robots control lower-level actions with lower-level computers. The program's director, Rodney Brooks, believes this is a more accurate model of human intelligence. We do most things automatically; we don't decide to do them at the highest level of consciousness.

The real challenge of AI is to understand how natural intelligence works. Developing AI isn't like building an artificial heart -- scientists don't have a simple, concrete model to work from. We do know that the brain contains billions and billions of neurons, and that we think and learn by establishing electrical connections between different neurons. But we don't know exactly how all of these connections add up to higher reasoning, or even low-level operations. The complex circuitry seems incomprehensible.
Because of this, AI research is largely theoretical. Scientists hypothesize on how and why we learn and think, and they experiment with their ideas using robots. Brooks and his team focus on humanoid robots because they feel that being able to experience the world like a human is essential to developing human-like intelligence. It also makes it easier for people to interact with the robots, which potentially makes it easier for the robot to learn.
Just as physical robotic design is a handy tool for understanding animal and human anatomy, AI research is useful for understanding how natural intelligence works. For some roboticists, this insight is the ultimate goal of designing robots. Others envision a world where we live side by side with intelligent machines and use a variety of lesser robots for manual labor, health care and communication. A number of robotics experts predict that robotic evolution will ultimately turn us into cyborgs -- humans integrated with machines. Conceivably, people in the future could load their minds into a sturdy robot and live for thousands of years!
In any case, robots will certainly play a larger role in our daily lives in the future. In the coming decades, robots will gradually move out of the industrial and scientific worlds and into daily life, in the same way that computers spread to the home in the 1980s.


Source :
http://science.howstuffworks.com/robot6.htm

Turing Test



The "standard interpretation" of the Turing Test, in which player C, the interrogator, is tasked with trying to determine which player - A or B - is a computer and which is a human. The interrogator is limited to using the responses to written questions in order to make the determination. Image adapted from Saygin, 2000.


What Is the Turing Test?


      A test devised by the English mathematician Alan M. Turing to determine whether or not a computer can be said to think like a human brain. In an attempt to cut through the philosophical debate about how to define "thinking," Turing devised a subjective test to answer the question, "Can machines think?" and reasoned that if a computer acts, reacts and interacts like a sentient being, then call it sentient. 

     The test is simple: a human interrogator is isolated and given the task of distinguishing between a human and a computer based on their replies to questions that the interrogator poses. After a series of tests are performed, the interrogator attempts to determine which subject is human and which is an artificial intelligence. The computer's success at thinking can be quantified by its probability of being misidentified as the human subject.
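
As a small worked example of that quantification, using the vote counts reported for Cleverbot earlier in this post (334 votes, 59.3% judged human) rather than any new data:

```python
# Quantify Turing-test performance as the fraction of judges who rated the
# machine as human, and compare it to the often-cited 50.05% threshold.
votes_total = 334
judged_human = round(0.593 * votes_total)   # ~198 of the 334 votes

human_rate = judged_human / votes_total
print(f"Judged human in {human_rate:.1%} of votes")
print("Above the 50.05% threshold:", human_rate > 0.5005)
```
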

What Is Artificial Intelligence?



Artificial intelligence (AI) is a branch of computer science and technology that studies and develops intelligent machines and software.

When it comes to making complex judgement calls, computers can’t replace people. But with artificial intelligence, computers could be trained to think like humans do.



Artificial intelligence allows computers to :
  • Learn from experience
  • Recognize patterns in large amounts of complex data 
  • Make complex decisions based on human knowledge and reasoning skills. 
Artificial intelligence has become an important field of study with a wide spread of applications in fields ranging from medicine to agriculture.



Artificial Intelligence related topics






Basic Questions of AI





Q. What is artificial intelligence?
A. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

Q. Yes, but what is intelligence?

A. Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.

Q. Isn't there a solid definition of intelligence that doesn't depend on relating it to human intelligence?

A. Not yet. The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others.

Q. Is intelligence a single thing so that one can ask a yes or no question "Is this machine intelligent or not?"

A. No. Intelligence involves mechanisms, and AI research has discovered how to make computers carry out some of them and not others. If doing a task requires only mechanisms that are well understood today, computer programs can give very impressive performances on these tasks. Such programs should be considered "somewhat intelligent".

Q. Isn't AI about simulating human intelligence?

A. Sometimes but not always or even usually. On the one hand, we can learn something about how to make machines solve problems by observing other people or just by observing our own methods. On the other hand, most work in AI involves studying the problems the world presents to intelligence rather than studying people or animals. AI researchers are free to use methods that are not observed in people or that involve much more computing than people can do.