The Association for the Advancement of Artificial Intelligence (AAAI) invites you to attend the AAAI-15 Open House
Monday, January 26, 2015
Hyatt Regency, 208 Barton Springs Road, Austin, TX
9:00am - 6:00pm
Free to the public!
The Association for the Advancement of Artificial Intelligence (AAAI) will be holding a public open house as part of its annual research conference. The public is invited to come and see
a small sample of the latest work in Artificial Intelligence, including robotics, game-playing programs, and much more.
Admission to the open house is free, but please register here: movingai.com/AAAI15/register.html.
Please contact William Yeoh (email@example.com) or Nathan Sturtevant (firstname.lastname@example.org) for inquiries, or if you would like to bring a group of participants to attend this event.
Download flyer: jpg - pdf
The Future of (Artificial) Intelligence
- Speaker: Stuart Russell, University of California, Berkeley
- Time: 1:00pm
- Location: Zilker 3 Ballroom
- Abstract: The news media in recent months have been full of dire warnings
about the risk that AI poses to the human race, coming from
well-known figures such as Stephen Hawking and Elon Musk.
Should we be concerned? If so, what can we do about it?
If Machines Are Capable of Doing Almost Any Work Humans Can Do, What Will Humans Do?
- Speaker: Moshe Vardi, Rice University
- Time: 4:30pm
- Location: Zilker 3 Ballroom
- Abstract: Over the past 15 years Artificial Intelligence (AI) has made
remarkable progress. While AI has proven to be much more
difficult than its early pioneers believed, its inexorable progress
over the past 50 years suggests that H. Simon was probably right when
he wrote in 1956 "machines will be capable ... of doing any work a man
can do." I do not expect this to happen in the very near future, but I
do believe that by 2045 machines will be able to do a very significant
fraction of the work that humans can do. The following question,
therefore, seems to be of paramount importance. If machines are
capable of doing almost any work humans can do, what will humans do?
- CogSketch: sketch understanding for cognitive science and education
People use sketching to work through ideas and to communicate with others. We will demonstrate CogSketch, a sketch understanding system that uses cognitive models of spatial reasoning and visual comparison to understand hand-drawn sketches in human-like ways. CogSketch incorporates visual processing of digital ink, qualitative spatial representations, analogical matching, and a large open-source knowledge base. It has already been used by psychologists and learning scientists for gathering data in laboratory experiments, by cognitive scientists to simulate human visual reasoning, and in classroom experiments in geoscience and engineering. We will show examples of how CogSketch can be used in laboratory and classroom settings.
Plan, Repair, Execute, Explain - How Planning Helps to Assemble your Home Theater
Institute of Artificial Intelligence, Ulm University, Germany
Modern technical devices are often too complex for many users to use to their full extent. Based on user-centered planning technology, we are able to provide advanced user assistance for operating technical devices. We present a system that assists a human user in setting up a complex home theater consisting of several HiFi devices. For the user, the task is rather challenging due to the large number of different ports on the devices and the variety of available cables. The system supports the user by giving detailed instructions and explanations.
- KU Leuven Innovation Lab for High School Students
Wannes Meert, Guy Van den Broeck, Jan Van Haaren
The KU Leuven Faculty of Engineering's Innovation Lab is an initiative to inspire high school students to become engineers and scientists by having them build an actual real-life device. During project days at their local schools, students are challenged to design and assemble hardware and software themselves to achieve a given task that serves society:
- Build an electrooculograph eye motion sensor to control a device. The students develop their own hardware to measure the bio-potential present around the eye, design a smart algorithm to recognise the looking direction and design a game where a wheelchair moves through a maze, controlled by these eye movements.
- Build an expert system that plays poker. The students need to adapt their bot, as fast as possible, to new behaviours and strategies exhibited by other bots in order to become or stay the top player.
- Fittle, a Mobile Health & Wellness App
This demo will show the AI deployed in the Fittle app. Our key technology is a coaching agent, FittleBot, which aims to provide users with social support and motivation for achieving their health and wellness goals. Fittle's challenges are built around teams, where each team has its own FittleBot to give personalized recommendations, support team building, and provide information and tips. We will show how Fittle collects and creates a user model, which is used for personalizing goals and the specific activities to meet them. Studies have shown that participants using Fittle show significantly improved activity compliance and engagement.
Classifying guitar tab difficulty
UT Austin and member of Project:Possibility
Lunar Tabs is a software system that takes guitar tablature as input and outputs voice-read instructions for the song, with the intention of helping blind people play the guitar. With over 800k freely available guitar tabs (and growing), it is difficult to select a song to play based on skill level, so Lunar Tabs also offers a difficulty rating for songs to help users find the next appropriate musical piece. However, users vary in skill level, so it is hard to score difficulty objectively. This poses an interesting machine learning challenge: how can we learn a user's musical strengths and weaknesses and forecast difficulty ratings personalized for that user?
- We are Watson Labs
Dan Tecuci, Rob Turnkett
This demo will give an overview of Watson Labs, the innovation arm of the IBM Watson Group, located here in Austin! Our mission is innovation in the cognitive computing space, spanning the whole spectrum from learning algorithms to new ways to interact with computers and explore large datasets. We will showcase this through several demos.
Samsung Tune: A Scalable Song Recommender System
Senior Data Scientist, Samsung Research America
This work presents a new approach to recommending suitable songs from a collection of songs to the user. The goal of the system is to recommend songs that the user will like, that are fresh to the user's ear, and that fit the user's listening pattern. The user's listening pattern is analyzed to gauge their level of interest in the next song, and the user's behavior on the currently playing song is treated as feedback to adjust the recommendation strategy for the next one. Furthermore, the proposed method has been implemented on Apache Spark to make the recommender system scalable.
- Computer Playing Poker
Michael Bowling, Rob Holte, Nolan Bard, Neil Burch, Michael Johanson, Trevor Davis, Dustin Morrill
University of Alberta
Come play against the latest limit and no-limit Texas Hold'em poker programs developed by the University of Alberta, winners of the 2008 Man-Machine Poker Competition and twenty-one Annual Computer Poker Competition titles! Poker is often cited as the game most like real-life, as coping with uncertainty and risk are hallmarks of good decision-making in both. This is a chance to experience the next generation of advanced artificial intelligence algorithms being developed now within the testbed of poker.
- Fuego Go Program
Martin Mueller (presented by Yeqin Zhang)
University of Alberta
The Fuego Go program is one of the leading programs that plays the game of Go. It is an open source project initiated by the University of Alberta. It has won several international competitions and was the first program to beat a top-level professional human player on a small 9x9 Go board on even terms.
- MoHex, a strong Hex player
Department of Computing Science, University of Alberta
MoHex is a strong Hex-playing program. The overall algorithm is based on Monte Carlo Tree Search, an algorithm that explores a search space by making random simulations to estimate position outcomes, exploring promising moves more often than less-promising moves. MCTS is usually enhanced in various ways. Here, we use an "all-moves-as-first" heuristic, and also incorporate a "virtual connection" algorithm, which can immediately reject many moves as not worth considering, and can solve positions many moves before the end of the game.
The result is a program which easily defeats all but the best humans on the classic 11x11 Hex board.
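The Monte Carlo Tree Search loop described above (random simulations estimating position outcomes, with promising moves explored more often) can be sketched on a toy game. The following is a generic UCT-style MCTS on Nim (remove 1-3 stones; whoever takes the last stone wins), not MoHex's actual implementation; it omits the all-moves-as-first heuristic and virtual-connection pruning, and all names are illustrative.

```python
import math
import random

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones      # stones remaining; opponent of parent's mover is to move
        self.parent = parent
        self.move = move          # the move that led here
        self.children = []
        self.visits = 0
        self.wins = 0             # wins for the player who moved into this node

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2, 3) if m <= self.stones and m not in tried]

def uct_select(node, c=1.4):
    # pick the child maximizing the UCB1 score (exploitation + exploration)
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits +
                              c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(stones):
    # random playout; returns True if the player to move ends up winning
    to_move_wins = True
    while True:
        stones -= random.choice([m for m in (1, 2, 3) if m <= stones])
        if stones == 0:
            return to_move_wins
        to_move_wins = not to_move_wins

def mcts(stones, iterations=2000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1) selection: descend through fully expanded nodes
        while not node.untried_moves() and node.children:
            node = uct_select(node)
        # 2) expansion: add one untried child, if any
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            node = Node(node.stones - m, parent=node, move=m)
            node.parent.children.append(node)
        # 3) simulation from the new node's position
        win_for_mover = node.stones > 0 and rollout(node.stones)
        # 4) backpropagation, flipping perspective at each level
        win = not win_for_mover   # did the player who moved into `node` win?
        while node is not None:
            node.visits += 1
            node.wins += win
            win = not win
            node = node.parent
    # play the most-visited move
    return max(root.children, key=lambda ch: ch.visits).move
```

From 2 or 3 stones the search converges on taking everything, the immediately winning move; MoHex's enhancements exist precisely to make this convergence fast enough for a game as deep as 11x11 Hex.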
- 2012 BotPrize Champion: Human-like Bot for Unreal Tournament
Dr. Jacob Schrum, Risto Miikkulainen
Southwestern University, UT Austin
The famous Turing Test poses the question of whether a computer can fool people into believing it is human via a text conversation. In contrast, the BotPrize competition posed the question of whether a computer playing a First-Person Shooter video game (Unreal Tournament 2004) could convince other players it was human. The competition ran for 5 years before the question was answered in the affirmative: the bot that tricked players into thinking it was human over 50% of the time is presented here, and now you have the chance to see if you can distinguish between the human and the bot.
- Angry Birds
Xiaoyu Ge, Jochen Renz
Australian National University
- A Multivariate Timeseries Modeling Approach to Severity of Illness Assessment and Forecasting in ICU with Sparse, Heterogeneous Clinical Data
Marzyeh Ghassemi, Tristan Naumann, Mengling Feng
The ability to determine patient acuity (or severity of illness) has immediate practical use for clinicians. We evaluate the use of multivariate timeseries modeling with the multi-task Gaussian process (GP) models using noisy, incomplete, sparse, heterogeneous and unevenly-sampled clinical data, including both physiological signals and clinical notes. The learned multi-task GP (MTGP) hyperparameters are then used to assess and forecast patient acuity.
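As a rough illustration of the kind of GP interpolation and forecasting involved, here is a minimal single-task GP posterior on unevenly sampled, noisy vital-sign readings. The demo itself uses multi-task GPs on real clinical data; the time stamps, heart-rate values, and kernel hyperparameters below are entirely made up.

```python
import numpy as np

# hypothetical, unevenly sampled heart-rate readings (hours, bpm)
t = np.array([0.0, 0.7, 1.1, 3.5, 4.0, 7.2, 9.9])
hr = np.array([82.0, 85, 88, 79, 77, 90, 95])
noise = 1.0  # assumed observation-noise standard deviation

def rbf(a, b, ell=2.0, sf=10.0):
    # squared-exponential kernel; length-scale and signal std are illustrative
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

# standard GP regression: condition the prior on the noisy observations
K = rbf(t, t) + noise**2 * np.eye(len(t))
grid = np.linspace(0, 12, 25)        # interpolate, and forecast past the last sample
Ks = rbf(grid, t)
mu = hr.mean() + Ks @ np.linalg.solve(K, hr - hr.mean())
cov = rbf(grid, grid) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

The posterior standard deviation grows away from the observations, which is what makes GP forecasts attractive for clinical data: the model reports not only an acuity estimate but also how uncertain it is.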
- Leveraging Multi-modalities for Egocentric Activity Recognition
National Taiwan University
A rising trend in Egocentric Activity Recognition is the analysis of videos from head-mounted wearable cameras such as Google Glass. However, previous work made restrictive assumptions to simplify recognition: for example, that the camera and the subject's visual field share the same coordinates, or that the time interval of each activity is the same. In this work, we build a dataset of daily-life activities without the above assumptions and elaborate on the difficulties. We further leverage multiple modalities describing the subject's surroundings, e.g., object, motion, and scene information, and automatically classify what the subject is doing.
- Goal Recognition Design
Sarah Keren, Avigdor Gal
Technion - Israel Institute of Technology
Goal recognition is the task of inferring the goal of an agent from online observation of its behavior.
Goal recognition design is a new approach that involves the offline analysis of goal recognition models (or systems), by
formulating measures that assess the ability to perform goal recognition within a model and finding efficient ways to
compute and optimize them. Goal recognition design is relevant to any domain in which quickly performing goal recognition is essential and in which the model design can be controlled. Applications include intrusion
detection, assisted cognition, natural language processing, and computer games.
- Incentivizing Users for Balancing Bike Sharing Systems
We discuss the challenges that operators of bike sharing systems face from fluctuating and unpredictable demand, which leads to imbalance problems such as the unavailability of bikes or parking docks at stations. We present a crowdsourcing mechanism that incentivizes users to take part in the bike repositioning process by offering them alternate stations at which to pick up or return bikes in exchange for monetary incentives. We deployed the proposed mechanism through a smartphone app among users of a large-scale bike sharing system operated by a public transport company in a European city, and we provide results from this experimental deployment.
- A Multi-Pass Sieve for Name Normalization
University of Texas at Dallas
Often in natural language, one finds the same concept being referred to with varying names. For example, swelling of abdomen, abdominal swelling, swollen abdomen, abdominal distention, etc., are all synonymous names essentially referring to the same concept. Without a name-to-concept mapping mechanism, such varied forms of naming a concept can prove quite problematic to information retrieval systems (e.g., search engines). Name normalization facilitates precisely such a mapping.
We propose a simple multi-pass sieve framework that applies tiers of deterministic normalization modules one at a time, from highest to lowest precision, for the task of normalizing names. The characteristic features of this approach are that it is simple, effective, and highly modular. It also proves robust when evaluated on two different kinds of data, clinical notes and biomedical text, demonstrating high accuracy in normalizing disorder names found in both datasets.
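The tiered idea can be sketched as follows. The modules, lexicon, synonym table, and concept IDs below are toy stand-ins for illustration, not the authors' actual sieves or any real terminology resource.

```python
# hypothetical canonical-name -> concept-ID lexicon (IDs are made up)
LEXICON = {
    "abdominal swelling": "C0000731",
    "headache": "C0018681",
}
SYNONYMS = {"swollen abdomen": "abdominal swelling"}  # toy synonym table

def exact_match(name):
    # tier 1: highest precision, a direct lexicon lookup
    return LEXICON.get(name.lower().strip())

def reordered_match(name):
    # tier 2: handle comma inversions, e.g. "swelling, abdominal"
    if "," in name:
        parts = [p.strip() for p in name.split(",")]
        return LEXICON.get(" ".join(reversed(parts)).lower())
    return None

def synonym_match(name):
    # tier 3: lower precision, map through a synonym table first
    canonical = SYNONYMS.get(name.lower().strip())
    return LEXICON.get(canonical) if canonical else None

# modules ordered from highest to lowest precision; the first that fires wins
SIEVE = [exact_match, reordered_match, synonym_match]

def normalize(name):
    for module in SIEVE:
        concept = module(name)
        if concept is not None:
            return concept
    return None  # left unresolved rather than guessed
```

A name like "swelling, abdominal" is resolved by the second tier, while an exact lexicon hit never reaches the lower-precision modules; that ordering is what keeps overall precision high.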
- Building a Professor Recommendation System Using Clustering
You know the course you want to take, but will you like the teacher? We created a way to help college students find instructors with the qualities students consider most important. We devised, implemented, and assessed a recommendation system based on a hybrid of user input and peer review data. We used clustering to group Pomona College professors based on numeric ratings and descriptions of professors from an online source of student reviews. We intend for this system to help students choose professors.
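A minimal sketch of the cluster-then-recommend idea: group professors by numeric rating vectors with k-means, then recommend the cluster closest to a student's stated preferences. The professors, rating dimensions, and values below are made up, not the actual Pomona data or system.

```python
import numpy as np

# hypothetical data: rows = professors, columns = averaged numeric ratings
# for (clarity, helpfulness, workload)
profs = ["Prof A", "Prof B", "Prof C", "Prof D"]
X = np.array([[4.8, 4.6, 2.0],
              [4.5, 4.9, 2.5],
              [2.1, 2.5, 4.5],
              [2.4, 2.0, 4.8]])

def kmeans(X, k=2, iters=20, seed=0):
    # plain Lloyd's algorithm: assign to nearest center, recompute centers
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(X)

def recommend(prefs, labels, centers):
    # return the professors in the cluster whose center is nearest the preferences
    best = np.argmin(((centers - prefs) ** 2).sum(-1))
    return [p for p, lab in zip(profs, labels) if lab == best]
```

A student who values clarity and helpfulness over a heavy workload would query with something like `recommend(np.array([5.0, 5.0, 2.0]), labels, centers)` and get back the first cluster.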
- Fractal Reasoning
Keith McGreggor, Ph.D
Design and Intelligence Lab, School of Interactive Computing, Georgia Institute of Technology
Fractal reasoning is a new method for visual analogical reasoning about scenes represented as fractals. I illustrate how the fractal representation was developed, and how two human "superpowers" of visual reasoning (noticing novelty and shifting abstraction) lead to surprisingly strong results on tests of visual oddity and human intelligence.
- An Agent-Based Model of the Emergence and Transmission of a Language System for the Expression of Logical Combinations
Technical University of Catalonia, Spain
This poster presents an agent-based model of the emergence
and transmission of a language system for the expression of
logical combinations of propositions. The model assumes the
agents have some cognitive capacities for invention, adoption,
repair, induction and adaptation, a common vocabulary
for basic categories, and the ability to construct complex concepts
using recursive combinations of basic categories and
logical categories. It also supposes the agents initially do
not have a vocabulary for logical categories (i.e. logical connectives),
nor grammatical constructions for expressing logical
combinations of basic categories through language. The
results of the experiments we have performed show that a
language system for the expression of logical combinations
emerges as a result of a process of self-organisation of the
agents' linguistic interactions. Such a language system is concise,
because it only uses words and grammatical constructions
for three logical categories (i.e. and, or, not). It is also
expressive, since it allows the communication of logical combinations
of categories of the same complexity as propositional
logic formulas, using linguistic devices such as syntactic
categories, word order and auxiliary words. Furthermore,
it is easy to learn and reliably transmitted across generations,
according to the results of our experiments.
- Going Beyond Literal Command-Based Instructions: Extending Robotic Natural Language Interaction Capabilities
Human Robot Interaction Laboratory, Tufts University
The ultimate goal of human dialogue is to communicate intentions. However, these intentions are not always obvious without taking context into account. For example, "I need coffee" is probably an order for coffee if you're talking to a barista, but is probably a simple complaint if you're talking to your friend. Unfortunately, most robots are unable to make such distinctions. We present mechanisms for understanding and generating these types of utterances, and for asking for clarification when the robot is unsure how to interpret what it hears. We then provide examples of these mechanisms at work on an actual robot.
- Borrowing from Biology: Using Genetic Algorithms and Hierarchical Genetic Algorithms to Create Technology
Jennifer Seitzer, Associate Professor of Computer Science
Technology is all around us: in our homes, schools, shopping malls, and working and recreational environments. We create and adapt technology to fit ergonomically into many physical and disciplinary niches, and when designing and creating it we literally "beg, borrow, and steal" terminology, techniques, and methods from many other disciplines. Genetic algorithms and hierarchical genetic algorithms (HGAs) are techniques we have borrowed from Biology. This research falls in the realm of artificial intelligence and offers a new set of tools for generating and modeling many kinds of problems, including complex adaptive emergent systems. The technique of HGAs simultaneously evolves multiple levels of solutions, a feat that so far has largely been achieved only by accident, via emergent behavior.
In this talk, we will demonstrate how these algorithms "borrowed" from Biology are remarkably eclectic, used ubiquitously inside so many of our computational devices, ranging from our iPhones to the cars we drove to the conference.
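As a taste of the basic (non-hierarchical) genetic algorithm machinery the talk builds on, here is a minimal GA on the classic OneMax toy problem: evolve bitstrings toward all ones via tournament selection, one-point crossover, and bit-flip mutation. All parameters and names are illustrative, not from the speaker's systems.

```python
import random

def fitness(bits):
    # OneMax: count the 1s; the optimum is the all-ones string
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)              # one-point crossover
            child = [b ^ (rng.random() < p_mut)         # bit-flip mutation
                     for b in p1[:cut] + p2[cut:]]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

An HGA layers this same loop: one population evolves candidate solutions while another evolves the parameters or structure that govern the first, so multiple levels of solutions improve simultaneously.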
- Cerebella: Automatic Generation of Nonverbal Behavior for Virtual Humans
Margot Lhommet, Yuyu Xu, Stacy Marsella
- Scheherazade: Crowd-Powered Interactive Narrative Generation
Boyang Li, Mark O. Riedl
- SimSensei Demonstration: A Perceptive Virtual Human Interviewer for Healthcare Applications
Louis-Philippe Morency, Giota Stratou, David DeVault, Arno Hartholt, Margaux Lhommet, Gale Lucas, Fabrizio Morbini, Kallirroi Georgila, Stefan Scherer, Jonathan Gratch, Stacy Marsella, David Traum, Albert Rizzo
- LOL - Laugh Out Loud
Florian Pecune, Beatrice Biancardi, Yu Ding, Catherine Pelachaud, Maurizio Mancini, Giovanna Varni, Antonio Camurri, Gualtiero Volpe
- Using Social Relationships to Control Narrative Generation
Julie Porteous, Fred Charles, Marc Cavazza
- Interactive Narrative Planning in The Best Laid Plans
Stephen G. Ware, R. Michael Young, Phillip Wright, Christian Stith
- Title TBA
Zilker 1 Ballroom and Zilker Foyer
- Robocup Soccer Exhibition
The RoboCup competitions have promoted research on artificial intelligence and robotics since 1997. One of their main foci is the worldwide popular game of soccer, with the aim of building fully autonomous cooperative multi-robot systems that perform well in dynamic and adversarial environments.
Given the recent expansion of interest in intelligent robotics, AAAI and the RoboCup Federation, with the help of NSF, are co-sponsoring a RoboCup soccer exhibition match at AAAI-15 to showcase the state-of-the-art in robotics soccer to the broad artificial intelligence research community and spur additional interest in this exciting testbed for intelligent systems. The participating teams will be UPennalizers from The University of Pennsylvania, UT Austin Villa from the University of Texas at Austin, and rUNSWift from the University of New South Wales. Each team won a championship at the 2014 international competition (in the humanoid league, 3D simulation league, and Standard Platform League respectively). They will demonstrate a game according to the regulations of the Standard Platform League, in which all teams use identical Aldebaran Nao robots.
- Adept MobileRobots
Contact: Chad LaCroix
Robot Name: Pioneer 3-DX and Pioneer LX
Since 1995 when we launched the first Pioneer robot, Adept MobileRobots (previously known as ActivMedia Robotics and MobileRobots, Inc.) has grown to be a global leader in the design and manufacture of intelligent mobile robots. In 2010, MobileRobots Inc. was acquired by the largest industrial robotics company in the US, Adept Technology. The Adept MobileRobots academic and research division continues to provide the world's leading mobile robot platforms for mobile robotics research.
- Duke University
Contact: George Konidaris
Come and meet a few members of the newly launched Duke Robotics!
- Oregon State University
Contact: Kagan Tumer
As robots become a daily part of our lives, they must learn to work closely with us in our homes and workplaces. Oregon State University is leading robotics research for the real world with new MS and PhD programs and expertise in locomotion, manipulation, decision making, human-robot interaction, and coordination.
- Texas A&M University
Contact: Dr. Robin R. Murphy
Robot Names: Survivor Buddy, AirRobot 100, Bujold, AC-ROV
Team Name: Center for Robot-Assisted Search and Rescue
The Center for Robot-Assisted Search and Rescue (CRASAR) is devoted to field research, education, and advocacy. CRASAR has participated in 17 incidents, including the 9/11 World Trade Center, Hurricane Katrina, and the Fukushima Daiichi nuclear accident. Artificial intelligence is needed throughout the data to decision process, not just for control.
- TRACLabs, Inc.
Contact: Stephen Hart, PhD, Senior Scientist
TRACLabs, located in Houston, Texas, performs research and development in robotics and artificial intelligence for a variety of government agencies and commercial companies. TRACLabs is the only small company to receive DARPA funding for the DARPA Robotics Challenge (DRC) Finals to be held in June 2015.
- University of California, Irvine
Contact: Ting-Shuo Chou
Robot Name: CARL-SJR
Team Name: Cognitive Anteater Robotics Laboratory (CARL)
CARL-SJR is a socially assistive robot with a nearly full-body tactile sensory area that encourages people to communicate with it through touch, and a surface that displays animated colorful patterns. CARL-SJR is also a neuromorphic robot, with a spiking neural network model providing learning capability. Several socially assistive games for ASD and ADHD therapy are built on CARL-SJR.
- University of Texas at Austin
Contact: Jivko Sinapov
Robot Name: Segbots
Since September 2014, our team of 5 Segbots has traveled over 140 km without human guidance throughout the Computer Science building at UT Austin. They can navigate autonomously, learn new words for places and objects, and even draw using a Kinova MICO arm. Come see them in action!
- University of Texas at Austin
Contact: Luis Sentis
Robot Name: Dreamer Humanoid Robot
- University of Texas at Arlington
Contact: Dan Popa
Robot Name: Phillip K. Dick